Chapter 2. Configuring Provisioning Resources
Chapter 2. Configuring Provisioning Resources 2.1. Provisioning Contexts A provisioning context is the combination of an organization and location that you specify for Satellite components. The organization and location that a component belongs to set the ownership and access for that component. Organizations divide Red Hat Satellite components into logical groups based on ownership, purpose, content, security level, and other divisions. You can create and manage multiple organizations through Red Hat Satellite and assign components to each individual organization. This ensures Satellite Server provisions hosts within a certain organization and only uses components that are assigned to that organization. For more information about organizations, see Managing Organizations in the Content Management Guide . Locations function similarly to organizations. The difference is that locations are based on physical or geographical setting. Users can nest locations in a hierarchy. For more information about locations, see Managing Locations in the Content Management Guide . 2.2. Setting the Provisioning Context When you set a provisioning context, you define which organization and location to use for provisioning hosts. The organization and location menus are located in the menu bar, on the upper left of the Satellite web UI. If you have not selected an organization and location to use, the menu displays: Any Organization and Any Location . Procedure Click Any Organization and select the organization. Click Any Location and select the location to use. Each user can set their default provisioning context in their account settings. Click the user name in the upper right of the Satellite web UI and select My account to edit your user account settings. CLI procedure When using the CLI, include either --organization or --organization-label and --location or --location-id as an option. For example: This command outputs hosts allocated to the Default_Organization and Default_Location. 2.3. Creating Operating Systems An operating system is a collection of resources that define how Satellite Server installs a base operating system on a host. Operating system entries combine previously defined resources, such as installation media, partition tables, provisioning templates, and others. Importing operating systems from Red Hat's CDN creates new entries on the Hosts > Operating Systems page. To import operating systems from Red Hat's CDN, enable the Red Hat repositories of the operating systems and synchronize the repositories to Satellite. For more information, see Enabling Red Hat Repositories and Synchronizing Repositories in Managing Content . You can also add custom operating systems using the following procedure. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Operating systems and click New Operating system. In the Name field, enter a name to represent the operating system entry. In the Major field, enter the number that corresponds to the major version of the operating system. In the Minor field, enter the number that corresponds to the minor version of the operating system. In the Description field, enter a description of the operating system. From the Family list, select the operating system's family. From the Root Password Hash list, select the encoding method for the root password. From the Architectures list, select the architectures that the operating system uses. 
Click the Partition table tab and select the possible partition tables that apply to this operating system. Optional: if you use non-Red Hat content, click the Installation Media tab and select the installation media that apply to this operating system. For more information, see Adding Installation Media to Satellite . Click the Templates tab and select a PXELinux template , a Provisioning template , and a Finish template for your operating system to use. You can select other templates, for example an iPXE template , if you plan to use iPXE for provisioning. Click Submit to save your operating system entry. CLI procedure Create the operating system using the hammer os create command: 2.4. Updating the Details of Multiple Operating Systems Use this procedure to update the details of multiple operating systems. This example shows you how to assign each operating system a partition table called Kickstart default , a configuration template called Kickstart default PXELinux , and a provisioning template called Kickstart default . Procedure On Satellite Server, run the following Bash script: PARTID=$(hammer --csv partition-table list | grep "Kickstart default," | cut -d, -f1) PXEID=$(hammer --csv template list --per-page=1000 | grep "Kickstart default PXELinux" | cut -d, -f1) SATID=$(hammer --csv template list --per-page=1000 | grep "provision" | grep ",Kickstart default" | cut -d, -f1) for i in $(hammer --no-headers --csv os list | awk -F, {'print $1'}) do hammer partition-table add-operatingsystem --id="${PARTID}" --operatingsystem-id="${i}" hammer template add-operatingsystem --id="${PXEID}" --operatingsystem-id="${i}" hammer os set-default-template --id="${i}" --config-template-id=${PXEID} hammer os add-config-template --id="${i}" --config-template-id=${SATID} hammer os set-default-template --id="${i}" --config-template-id=${SATID} done Display information about the updated operating system to verify that the operating system is updated correctly: 2.5. Creating Architectures An architecture in Satellite represents a logical grouping of hosts and operating systems. Architectures are created by Satellite automatically when hosts check in with Puppet. Basic i386 and x86_64 architectures are already preset in Satellite. Use this procedure to create an architecture in Satellite. Supported Architectures Only Intel x86_64 architecture is supported for provisioning using PXE, Discovery, and boot disk. For more information, see the Red Hat Knowledgebase solution Supported architectures and provisioning scenarios in Satellite 6 . Procedure In the Satellite web UI, navigate to Hosts > Architectures and click Create Architecture . In the Name field, enter a name for the architecture. From the Operating Systems list, select an operating system. If none are available, you can create and assign them under Hosts > Operating Systems . Click Submit . CLI procedure Enter the hammer architecture create command to create an architecture. Specify its name and the operating systems that include this architecture: 2.6. Creating Hardware Models Use this procedure to create a hardware model in Satellite so that you can specify which hardware model a host uses. Procedure In the Satellite web UI, navigate to Hosts > Hardware Models and click Create Model . In the Name field, enter a name for the hardware model. Optionally, in the Hardware Model and Vendor Class fields, you can enter corresponding information for your system. In the Info field, enter a description of the hardware model. 
Click Submit to save your hardware model. CLI procedure Create a hardware model using the hammer model create command. The only required parameter is --name . Optionally, enter the hardware model with the --hardware-model option, a vendor class with the --vendor-class option, and a description with the --info option: 2.7. Using a Synced Kickstart Repository for a Host's Operating System Satellite contains a set of synchronized kickstart repositories that you use to install the provisioned host's operating system. For more information about adding repositories, see Syncing Repositories in the Content Management Guide . Use this procedure to set up a kickstart repository. Prerequisites You must enable both BaseOS and AppStream Kickstart before provisioning. Procedure Add the synchronized kickstart repository that you want to use to the existing Content View, or create a new Content View and add the kickstart repository. For Red Hat Enterprise Linux 8, ensure that you add both Red Hat Enterprise Linux 8 for x86_64 - AppStream Kickstart x86_64 8 and Red Hat Enterprise Linux 8 for x86_64 - BaseOS Kickstart x86_64 8 repositories. If you use a disconnected environment, you must import the Kickstart repositories from a Red Hat Enterprise Linux binary DVD. For more information, see Importing Kickstart Repositories in the Content Management Guide . Publish a new version of the Content View where the kickstart repository is added and promote it to a required lifecycle environment. For more information, see Managing Content Views in the Content Management Guide . When you create a host, in the Operating System tab, for Media Selection , select the Synced Content checkbox. To view the kickstart tree, enter the following command: 2.8. Adding Installation Media to Satellite Installation media are sources of packages that Satellite Server uses to install a base operating system on a machine from an external repository. You can use this parameter to install third-party content. Red Hat content is delivered through repository syncing instead. You can view installation media by navigating to Hosts > Provisioning Setup > Installation Media . Installation media must be in the format of an operating system installation tree and must be accessible from the machine hosting the installer through an HTTP URL. By default, Satellite includes installation media for some official Linux distributions. Note that some of those installation media are targeted for a specific version of an operating system. For example, CentOS mirror (7.x) must be used for CentOS 7 or earlier, and CentOS mirror (8.x) must be used for CentOS 8 or later. If you want to improve download performance when using installation media to install operating systems on multiple hosts, you must modify the Path of the installation medium to point to the closest mirror or a local copy. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Installation Media and click Create Medium . In the Name field, enter a name to represent the installation media entry. In the Path field, enter the URL that contains the installation tree. You can use the following variables in the path to represent multiple different system architectures and versions: $arch - The system architecture. $version - The operating system version. $major - The operating system major version. $minor - The operating system minor version. 
Example HTTP path: From the Operating system family list, select the distribution or family of the installation medium. For example, CentOS and Fedora are in the Red Hat family. Click the Organizations and Locations tabs to change the provisioning context. Satellite Server adds the installation medium to the set provisioning context. Click Submit to save your installation medium. CLI procedure Create the installation medium using the hammer medium create command: 2.9. Creating Partition Tables A partition table is a type of template that defines the way Satellite Server configures the disks available on a new host. A partition table uses the same ERB syntax as provisioning templates. Red Hat Satellite contains a set of default partition tables to use, including a Kickstart default . You can also edit partition table entries to configure the preferred partitioning scheme, or create a partition table entry and add it to the operating system entry. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Partition Tables and click Create Partition Table . In the Name field, enter a name for the partition table. Select the Default checkbox if you want to set the template to automatically associate with new organizations or locations. Select the Snippet checkbox if you want to identify the template as a reusable snippet for other partition tables. From the Operating System Family list, select the distribution or family of the partitioning layout. For example, Red Hat Enterprise Linux, CentOS, and Fedora are in the Red Hat family. In the Template editor field, enter the layout for the disk partition. For example: You can also use the Template file browser to upload a template file. The format of the layout must match that for the intended operating system. For example, Red Hat Enterprise Linux requires a layout that matches a kickstart file. In the Audit Comment field, add a summary of changes to the partition layout. Click the Organizations and Locations tabs to add any other provisioning contexts that you want to associate with the partition table. Satellite adds the partition table to the current provisioning context. Click Submit to save your partition table. CLI procedure Before you create a partition table with the CLI, create a plain text file that contains the partition layout. This example uses the ~/my-partition file. Create the partition table using the hammer partition-table create command: 2.10. Dynamic Partition Example Using an Anaconda kickstart template, the following section instructs Anaconda to erase the whole disk, automatically partition, enlarge one partition to maximum size, and then proceed to the sequence of events in the provisioning process: Dynamic partitioning is executed by the installation program. Therefore, you can write your own rules to specify how you want to partition disks according to runtime information from the node, for example, disk sizes, number of drives, vendor, or manufacturer. If you want to provision servers and use dynamic partitioning, add the following example as a template. When the #Dynamic entry is included, the content of the template loads into a %pre shell scriptlet and creates a /tmp/diskpart.cfg that is then included in the Kickstart partitioning section. 2.11. Provisioning Templates A provisioning template defines the way Satellite Server installs an operating system on a host. Red Hat Satellite includes many template examples. 
In the Satellite web UI, navigate to Hosts > Provisioning templates to view them. You can create a template or clone a template and edit the clone. For help with templates, navigate to Hosts > Provisioning templates > Create Template > Help . Templates supported by Red Hat are indicated by a Red Hat icon. To hide unsupported templates, in the Satellite web UI navigate to Administer > Settings . On the Provisioning tab, set the value of Show unsupported provisioning templates to false and click Submit . You can also filter for supported templates by using the search query "supported = true". If you clone a supported template, the cloned template will be unsupported. Templates accept the Embedded Ruby (ERB) syntax. For more information, see Template Writing Reference in Managing Hosts . You can download provisioning templates. Before you can download the template, you must create a debug certificate. For more information, see Creating an Organization Debug Certificate in the Content Management Guide . You can synchronize templates between Satellite Server and a Git repository or a local directory. For more information, see Synchronizing Templates Repositories in the Managing Hosts guide. To view the history of changes applied to a template, navigate to Hosts > Provisioning templates , select one of the templates, and click History . Click Revert to override the current content with that version. You can also revert to an earlier change. Click Show Diff to see information about a specific change: The Template Diff tab displays changes in the body of a provisioning template. The Details tab displays changes in the template description. The History tab displays the user who made a change to the template and the date of the change. 2.12. Types of Provisioning Templates There are various types of provisioning templates: Provision The main template for the provisioning process. For example, a kickstart template. For more information about kickstart template syntax, see the Kickstart Syntax Reference in the Red Hat Enterprise Linux 7 Installation Guide . PXELinux, PXEGrub, PXEGrub2 PXE-based templates that deploy to the template Capsule associated with a subnet to ensure that the host uses the installer with the correct kernel options. For BIOS provisioning, select the PXELinux template. For UEFI provisioning, select PXEGrub2 . Finish Post-configuration scripts to execute using an SSH connection when the main provisioning process completes. You can use Finish templates only for image-based provisioning in virtual or cloud environments that do not support user_data. Do not confuse an image with a Foreman discovery ISO, which is sometimes called a Foreman discovery image. An image in this context is an install image in a virtualized environment for easy deployment. When a finish script successfully exits with the return code 0 , Red Hat Satellite treats the code as a success and the host exits the build mode. Note that there are a few finish scripts with a build mode that uses a callback HTTP call. These scripts are not used for image-based provisioning, but for post-configuration of operating-system installations such as Debian, Ubuntu, and BSD. Red Hat does not support provisioning of operating systems other than Red Hat Enterprise Linux. user_data Post-configuration scripts for providers that accept custom data, also known as seed data. You can use the user_data template to provision virtual machines in cloud or virtualized environments only. 
This template does not require Satellite to be able to reach the host; the cloud or virtualization platform is responsible for delivering the data to the image. Ensure that the image that you want to provision has the software to read the data installed and set to start during boot. For example, cloud-init , which expects YAML input, or ignition , which expects JSON input. cloud_init Some environments, such as VMWare, either do not support custom data or have their own data format that limits what can be done during customization. In this case, you can configure a cloud-init client with the foreman plug-in, which attempts to download the template directly from Satellite over HTTP or HTTPS. This technique can be used in any environment, preferably virtualized. Ensure that you meet the following requirements to use the cloud_init template: Ensure that the image that you want to provision has the software to read the data installed and set to start during boot. A provisioned host is able to reach Satellite from the IP address that matches the host's provisioning interface IP. Note that cloud-init does not work behind NAT. Bootdisk Templates for PXE-less boot methods. Kernel Execution (kexec) Kernel execution templates for PXE-less boot methods. Note Kernel Execution is a Technology Preview feature. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. Script An arbitrary script not used by default but useful for custom tasks. ZTP Zero Touch Provisioning templates. POAP PowerOn Auto Provisioning templates. iPXE Templates for iPXE or gPXE environments to use instead of PXELinux. 2.13. Creating Provisioning Templates A provisioning template defines the way Satellite Server installs an operating system on a host. Use this procedure to create a new provisioning template. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Templates and click Create Template . In the Name field, enter a name for the provisioning template. Fill in the rest of the fields as required. The Help tab provides information about the template syntax and details the available functions, variables, and methods that can be called on different types of objects within the template. CLI procedure Before you create a template with the CLI, create a plain text file that contains the template. This example uses the ~/my-template file. Create the template using the hammer template create command and specify the type with the --type option: 2.14. Cloning Provisioning Templates A provisioning template defines the way Satellite Server installs an operating system on a host. Use this procedure to clone a template and add your updates to the clone. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Templates and search for the template that you want to use. Click Clone to duplicate the template. In the Name field, enter a name for the provisioning template. Select the Default checkbox to set the template to associate automatically with new organizations or locations. In the Template editor field, enter the body of the provisioning template. You can also use the Template file browser to upload a template file. 
In the Audit Comment field, enter a summary of changes to the provisioning template for auditing purposes. Click the Type tab and if your template is a snippet, select the Snippet checkbox. A snippet is not a standalone provisioning template, but a part of a provisioning template that can be inserted into other provisioning templates. From the Type list, select the type of the template. For example, Provisioning template . Click the Association tab and from the Applicable Operating Systems list, select the names of the operating systems that you want to associate with the provisioning template. Optionally, click Add combination and select a host group from the Host Group list or an environment from the Environment list to associate provisioning template with the host groups and environments. Click the Organizations and Locations tabs to add any additional contexts to the template. Click Submit to save your provisioning template. 2.15. Creating Compute Profiles You can use compute profiles to predefine virtual machine hardware details such as CPUs, memory, and storage. A default installation of Red Hat Satellite contains three predefined profiles: 1-Small 2-Medium 3-Large You can apply compute profiles to all supported compute resources: Section 1.2, "Supported Cloud Providers" Section 1.3, "Supported Virtualization Infrastructure" Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles and click Create Compute Profile . In the Name field, enter a name for the profile. Click Submit . A new window opens with the name of the compute profile. In the new window, click the name of each compute resource and edit the attributes you want to set for this compute profile. 2.16. Setting a Default Encrypted Root Password for Hosts If you do not want to set a plain text default root password for the hosts that you provision, you can use a default encrypted password. Procedure Generate an encrypted password: Copy the password for later use. In the Satellite web UI, navigate to Administer > Settings . On the Settings page, select the Provisioning tab. In the Name column, navigate to Root password , and click Click to edit . Paste the encrypted password, and click Save . 2.17. Using noVNC to Access Virtual Machines You can use your browser to access the VNC console of VMs created by Satellite. Satellite supports using noVNC on the following virtualization platforms: VMware Libvirt Red Hat Virtualization Prerequisites You must have a virtual machine created by Satellite. For existing virtual machines, ensure that the Display type in the Compute Resource settings is VNC . You must import the Katello root CA certificate into your Satellite Server. Adding a security exception in the browser is not enough for using noVNC. For more information, see the Installing the Katello Root CA Certificate section in the Administering Red Hat Satellite guide. Procedure On the VM host system, configure the firewall to allow VNC service on ports 5900 to 5930: On Red Hat Enterprise Linux 6: On Red Hat Enterprise Linux 7: In the Satellite web UI, navigate to Infrastructure > Compute Resources and select the name of a compute resource. In the Virtual Machines tab, select the name of a VM host. Ensure the machine is powered on and then select Console .
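For the default encrypted root password described in Section 2.16, the command listing below shows the Python one-liner from the documentation. As a hedged alternative sketch, the openssl passwd utility can produce a SHA-512 crypt hash interactively; this assumes OpenSSL 1.1.1 or later and is not the documented Satellite method, so verify the hash format your deployment expects before pasting it into the Root password setting.

# Generate a SHA-512 crypt hash; openssl prompts for the password (assumes OpenSSL >= 1.1.1)
openssl passwd -6
# Paste the resulting hash into Administer > Settings > Provisioning > Root password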
[ "hammer host list --organization \"Default_Organization\" --location \"Default_Location\"", "hammer os create --name \" MyOS \" --description \" My_custom_operating_system \" --major 7 --minor 3 --family \"Redhat\" --architectures \"x86_64\" --partition-tables \" My_Partition \" --media \" Red_Hat \" --provisioning-templates \" My_Provisioning_Template \"", "PARTID=USD(hammer --csv partition-table list | grep \"Kickstart default,\" | cut -d, -f1) PXEID=USD(hammer --csv template list --per-page=1000 | grep \"Kickstart default PXELinux\" | cut -d, -f1) SATID=USD(hammer --csv template list --per-page=1000 | grep \"provision\" | grep \",Kickstart default\" | cut -d, -f1) for i in USD(hammer --no-headers --csv os list | awk -F, {'print USD1'}) do hammer partition-table add-operatingsystem --id=\"USD{PARTID}\" --operatingsystem-id=\"USD{i}\" hammer template add-operatingsystem --id=\"USD{PXEID}\" --operatingsystem-id=\"USD{i}\" hammer os set-default-template --id=\"USD{i}\" --config-template-id=USD{PXEID} hammer os add-config-template --id=\"USD{i}\" --config-template-id=USD{SATID} hammer os set-default-template --id=\"USD{i}\" --config-template-id=USD{SATID} done", "hammer os info --id 1", "hammer architecture create --name \" My_Architecture \" --operatingsystems \" My_Operating_System \"", "hammer model create --hardware-model \" My_Hardware_Model \" --info \" My_Description \" --name \" My_Hardware_Model_Name \" --vendor-class \" My_Vendor_Class \"", "hammer medium list --organization \" My_Organization \"", "http://download.example.com/centos/USDversion/Server/USDarch/os/", "hammer medium create --locations \" My_Location \" --name \" My_OS \" --organizations \" My_Organization \" --os-family \"Redhat\" --path \"http://download.example.com/centos/USDversion/Server/USDarch/os/\"", "zerombr clearpart --all --initlabel autopart", "hammer partition-table create --file \" path/to/my_partition_table \" --locations \" My_Location \" --name \"My Partition Table\" --organizations \" My_Organization \" --os-family \"My_OS_Family\" --snippet false", "zerombr clearpart --all --initlabel autopart <%= host_param('autopart_options') %>", "#Dynamic (do not remove this line) MEMORY=USD((`grep MemTotal: /proc/meminfo | sed 's/^MemTotal: *//'|sed 's/ .*//'` / 1024)) if [ \"USDMEMORY\" -lt 2048 ]; then SWAP_MEMORY=USD((USDMEMORY * 2)) elif [ \"USDMEMORY\" -lt 8192 ]; then SWAP_MEMORY=USDMEMORY elif [ \"USDMEMORY\" -lt 65536 ]; then SWAP_MEMORY=USD((USDMEMORY / 2)) else SWAP_MEMORY=32768 fi cat <<EOF > /tmp/diskpart.cfg zerombr yes clearpart --all --initlabel part /boot --fstype ext4 --size 200 --asprimary part swap --size \"USDSWAP_MEMORY\" part / --fstype ext4 --size 1024 --grow EOF", "hammer template create --name \"My Provisioning Template\" --file ~/my-template --type provision --organizations \" My_Organization \" --locations \" My_Location \"", "python -c 'import crypt,getpass;pw=getpass.getpass(); print(crypt.crypt(pw)) if (pw==getpass.getpass(\"Confirm: \")) else exit()'", "iptables -A INPUT -p tcp --dport 5900:5930 -j ACCEPT service iptables save", "firewall-cmd --add-port=5900-5930/tcp firewall-cmd --add-port=5900-5930/tcp --permanent" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/configuring_provisioning_resources_provisioning
7.4. Configuring the Red Hat Support Tool
7.4. Configuring the Red Hat Support Tool When in interactive mode, the configuration options can be listed by entering the command config --help : Procedure 7.1. Registering the Red Hat Support Tool Using Interactive Mode To register the Red Hat Support Tool to the customer portal using interactive mode, proceed as follows: Start the tool by entering the following command: Enter your Red Hat Customer Portal user name: To save your user name to the global configuration file, add the -g option. Enter your Red Hat Customer Portal password: 7.4.1. Saving Settings to the Configuration Files The Red Hat Support Tool , unless otherwise directed, stores values and options locally in the home directory of the current user, using the ~/.redhat-support-tool/redhat-support-tool.conf configuration file. If required, it is recommended to save passwords to this file because it is only readable by that particular user. When the tool starts, it will read values from the global configuration file /etc/redhat-support-tool.conf and from the local configuration file. Locally stored values and options take precedence over globally stored settings. Warning It is recommended not to save passwords in the global /etc/redhat-support-tool.conf configuration file because the password is just base64 encoded and can easily be decoded. In addition, the file is world readable. To save a value or option to the global configuration file, add the -g, --global option as follows: Note In order to be able to save settings globally, using the -g, --global option, the Red Hat Support Tool must be run as root because normal users do not have the permissions required to write to /etc/redhat-support-tool.conf . To remove a value or option from the local configuration file, add the -u, --unset option as follows: This will clear, unset, the parameter from the tool and fall back to the equivalent setting in the global configuration file, if available. Note When running as an unprivileged user, values stored in the global configuration file cannot be removed using the -u, --unset option, but they can be cleared, unset, from the current running instance of the tool by using the -g, --global option simultaneously with the -u, --unset option. If running as root , values and options can be removed from the global configuration file using -g, --global simultaneously with the -u, --unset option.
[ "~]# redhat-support-tool Welcome to the Red Hat Support Tool. Command (? for help): config --help Usage: config [options] config.option <new option value> Use the 'config' command to set or get configuration file values. Options: -h, --help show this help message and exit -g, --global Save configuration option in /etc/redhat-support-tool.conf. -u, --unset Unset configuration option. The configuration file options which can be set are: user : The Red Hat Customer Portal user. password : The Red Hat Customer Portal password. debug : CRITICAL, ERROR, WARNING, INFO, or DEBUG url : The support services URL. Default=https://api.access.redhat.com proxy_url : A proxy server URL. proxy_user: A proxy server user. proxy_password: A password for the proxy server user. ssl_ca : Path to certificate authorities to trust during communication. kern_debug_dir: Path to the directory where kernel debug symbols should be downloaded and cached. Default=/var/lib/redhat-support-tool/debugkernels Examples: - config user - config user my-rhn-username - config --unset user", "~]# redhat-support-tool", "Command (? for help): config user username", "Command (? for help): config password Please enter the password for username :", "Command (? for help): config setting -g value", "Command (? for help): config setting -u value" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-configuring_the_red_hat_support_tool
B.107. xorg-x11-server
B.107. xorg-x11-server B.107.1. RHBA-2011:0340 - xorg-x11-server bug fix update Updated xorg-x11-server packages that fix a bug are now available for Red Hat Enterprise Linux 6. X.Org X11 is an open source implementation of the X Window System. It provides the basic low level functionality upon which full fledged graphical user interfaces such as GNOME and KDE are designed. Bug Fix BZ# 668514 Prior to this update, when the X Window System was unable to detect a monitor and obtain valid extended display identification data (EDID), it set the default resolution limit to 800x600. Consequent to this, users of the "mga" driver for Matrox video cards were unable to select a screen resolution higher than 800x600. This update increases the default limit to 1024x768, allowing users of Matrox video cards to select this resolution as expected. All users of xorg-x11-server are advised to upgrade to these updated packages, which resolve this issue.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/xorg-x11-server
Chapter 4. Deploying Apicurio Registry storage in a PostgreSQL database
Chapter 4. Deploying Apicurio Registry storage in a PostgreSQL database This chapter explains how to install, configure, and manage Apicurio Registry data storage in a PostgreSQL database. Section 4.1, "Installing a PostgreSQL database from the OpenShift OperatorHub" Section 4.2, "Configuring Apicurio Registry with PostgreSQL database storage on OpenShift" Section 4.3, "Backing up Apicurio Registry PostgreSQL storage" Section 4.4, "Restoring Apicurio Registry PostgreSQL storage" Prerequisites Chapter 2, Installing Apicurio Registry on OpenShift 4.1. Installing a PostgreSQL database from the OpenShift OperatorHub If you do not already have a PostgreSQL database Operator installed, you can install a PostgreSQL Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. For more details, see Understanding OperatorHub . Prerequisites You must have cluster administrator access to an OpenShift cluster. Procedure In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges. Change to the OpenShift project in which you want to install the PostgreSQL Operator. For example, from the Project drop-down, select my-project . In the left navigation menu, click Operators and then OperatorHub . In the Filter by keyword text box, enter PostgreSQL to find an Operator suitable for your environment, for example, Crunchy PostgreSQL for OpenShift . Read the information about the Operator, and click Install to display the Operator subscription page. Select your subscription settings, for example: Update Channel : stable Installation Mode : A specific namespace on the cluster and then my-project Approval Strategy : Select Automatic or Manual Click Install , and wait a few moments until the Operator is ready for use. Important You must read the documentation from your chosen PostgreSQL Operator for details on how to create and manage your database. Additional resources Adding Operators to an OpenShift cluster Crunchy PostgreSQL Operator QuickStart 4.2. Configuring Apicurio Registry with PostgreSQL database storage on OpenShift This section explains how to configure storage for Apicurio Registry on OpenShift using a PostgreSQL database Operator. You can install Apicurio Registry in an existing database or create a new database, depending on your environment. This section shows a simple example using the PostgreSQL Operator by Dev4Ddevs.com. Prerequisites You must have an OpenShift cluster with cluster administrator access. You must have already installed Apicurio Registry. See Chapter 2, Installing Apicurio Registry on OpenShift . You must have already installed a PostgreSQL Operator on OpenShift. For example, see Section 4.1, "Installing a PostgreSQL database from the OpenShift OperatorHub" . Procedure In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges. Change to the OpenShift project in which Apicurio Registry and your PostgreSQL Operator are installed. For example, from the Project drop-down, select my-project . Create a PostgreSQL database for your Apicurio Registry storage. For example, click Installed Operators , PostgreSQL Operator by Dev4Ddevs.com , and then Create database . 
Click YAML and edit the database settings as follows: name : Change the value to registry image : Change the value to centos/postgresql-12-centos7 Edit any other database settings as needed depending on your environment, for example: apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: registry namespace: my-project spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: example databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: postgres databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: postgres databaseUserKeyEnvVar: POSTGRESQL_USER image: centos/postgresql-12-centos7 size: 1 Click Create , and wait until the database is created. Click Installed Operators > Red Hat Integration - Service Registry > ApicurioRegistry > Create ApicurioRegistry . Paste in the following custom resource definition, and edit the values for the database url and credentials to match your environment: apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry-sql spec: configuration: persistence: 'sql' sql: dataSource: url: 'jdbc:postgresql://<service name>.<namespace>.svc:5432/<database name>' # e.g. url: 'jdbc:postgresql://acid-minimal-cluster.my-project.svc:5432/registry' userName: 'postgres' password: '<password>' # Optional Click Create and wait for the Apicurio Registry route to be created on OpenShift. Click Networking > Route to access the new route for the Apicurio Registry web console. For example: Additional resources Crunchy PostgreSQL Operator QuickStart Apicurio Registry Operator QuickStart 4.3. Backing up Apicurio Registry PostgreSQL storage When using storage in a PostgreSQL database, you must ensure that the data stored by Apicurio Registry is backed up regularly. SQL Dump is a simple procedure that works with any PostgreSQL installation. This uses the pg_dump utility to generate a file with SQL commands that you can use to recreate the database in the same state that it was in at the time of the dump. pg_dump is a regular PostgreSQL client application, which you can execute from any remote host that has access to the database. Like any other client, the operations that it can perform are limited to the user's permissions. Procedure Use the pg_dump command to redirect the output to a file: $ pg_dump dbname > dumpfile You can specify the database server that pg_dump connects to using the -h host and -p port options. You can reduce large dump files using a compression tool, such as gzip, for example: $ pg_dump dbname | gzip > filename.gz Additional resources For details on client authentication, see the PostgreSQL documentation . For details on importing and exporting registry content, see Managing Apicurio Registry content using the REST API . 4.4. Restoring Apicurio Registry PostgreSQL storage You can restore SQL Dump files created by pg_dump using the psql utility. Prerequisites You must have already backed up your PostgreSQL database using pg_dump . See Section 4.3, "Backing up Apicurio Registry PostgreSQL storage" . All users who own objects or have permissions on objects in the dumped database must already exist. Procedure Enter the following command to create the database: $ createdb -T template0 dbname Enter the following command to restore the SQL dump: $ psql dbname < dumpfile Run ANALYZE on each database so the query optimizer has useful statistics.
[ "apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: registry namespace: my-project spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: example databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: postgres databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: postgres databaseUserKeyEnvVar: POSTGRESQL_USER image: centos/postgresql-12-centos7 size: 1", "apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry-sql spec: configuration: persistence: 'sql' sql: dataSource: url: 'jdbc:postgresql://<service name>.<namespace>.svc:5432/<database name>' # e.g. url: 'jdbc:postgresql://acid-minimal-cluster.my-project.svc:5432/registry' userName: 'postgres' password: '<password>' # Optional", "http://example-apicurioregistry-sql.my-project.my-domain-name.com/", "pg_dump dbname > dumpfile", "pg_dump dbname | gzip > filename.gz", "createdb -T template0 dbname", "psql dbname < dumpfile" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/installing_and_deploying_apicurio_registry_on_openshift/installing-registry-db-storage
Chapter 6. Deleting a compliance report
Chapter 6. Deleting a compliance report Prerequisites Login access to the Red Hat Hybrid Cloud Console. Procedure Navigate to Red Hat Insights > Compliance > Reports. The list of available reports displays. Optional. Use the search filters to search for the report you want to delete. You may filter the reports by policy name, policy type, operating system, or by systems meeting compliance. Click the name of the report you want to delete. The report displays and shows the list of systems included in the report. Click Delete report on the upper right side of the report. The Delete report dialog box appears with the message Deleting a report is permanent and cannot be undone. Click the Delete report button to confirm that you want to delete the report.
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports/assembly-compl-deleting-reports
Chapter 66. Managing the validity of certificates in IdM
Chapter 66. Managing the validity of certificates in IdM In Identity Management (IdM), you can manage the validity of both already existing certificates and certificates you want to issue in the future, but the methods are different. 66.1. Managing the validity of an existing certificate that was issued by IdM CA In IdM, the following methods of viewing the expiry date of a certificate are available: Viewing the expiry date in IdM WebUI . Viewing the expiry date in the CLI . You can manage the validity of an already existing certificate that was issued by IdM CA in the following ways: Renew a certificate by requesting a new certificate using either the original certificate signing request (CSR) or a new CSR generated from the private key. You can request a new certificate using the following utilities: certmonger You can use certmonger to request a service certificate. Before the certificate is due to expire, certmonger will automatically renew the certificate, thereby ensuring a continuing validity of the service certificate. For details, see Obtaining an IdM certificate for a service using certmonger ; certutil You can use certutil to renew user, host, and service certificates. For details on requesting a user certificate, see Requesting a new user certificate and exporting it to the client ; openssl You can use openssl to renew user, host, and service certificates. Revoke a certificate. For details, see: Revoking certificates with the integrated IdM CAs using IdM WebUI ; Revoking certificates with the integrated IdM CAs using IdM CLI ; Restore a certificate if it has been temporarily revoked. For details, see: Restoring certificates with the integrated IdM CAs using IdM WebUI ; Restoring certificates with the integrated IdM CAs using IdM CLI . 66.2. Managing the validity of future certificates issued by IdM CA To manage the validity of future certificates issued by IdM CA, modify, import, or create a certificate profile. For details, see Creating and managing certificate profiles in Identity Management . 66.3. Viewing the expiry date of a certificate in IdM WebUI You can use IdM WebUI to view the expiry date of all the certificates that have been issued by IdM CA. Prerequisites Ensure that you have obtained the administrator's credentials. Procedure In the Authentication menu, click Certificates > Certificates . Click the serial number of the certificate to open the certificate information page. Figure 66.1. List of Certificates In the certificate information page, locate the Expires On information. 66.4. Viewing the expiry date of a certificate in the CLI You can use the command-line interface (CLI) to view the expiry date of a certificate. Procedure Use the openssl utility to open the file in a human-readable format: 66.5. Revoking certificates with the integrated IdM CAs 66.5.1. Certificate revocation reasons A revoked certificate is invalid and cannot be used for authentication. All revocations are permanent, except for reason 6: Certificate Hold . The default revocation reason is 0: unspecified . Table 66.1. Revocation Reasons ID Reason Explanation 0 Unspecified 1 Key Compromised The key that issued the certificate is no longer trusted. Possible causes: lost token, improperly accessed file. 2 CA Compromised The CA that issued the certificate is no longer trusted. 3 Affiliation Changed Possible causes: * A person has left the company or moved to another department. * A host or service is being retired. 4 Superseded A newer certificate has replaced the current certificate. 
5 Cessation of Operation The host or service is being decommissioned. 6 Certificate Hold The certificate is temporarily revoked. You can restore the certificate later. 8 Remove from CRL The certificate is not included in the certificate revocation list (CRL). 9 Privilege Withdrawn The user, host, or service is no longer permitted to use the certificate. 10 Attribute Authority (AA) Compromise The AA certificate is no longer trusted. 66.5.2. Revoking certificates with the integrated IdM CAs using IdM WebUI If you know you have lost the private key for your certificate, you must revoke the certificate to prevent its abuse. Complete this procedure to use the IdM WebUI to revoke a certificate issued by the IdM CA. Procedure Click Authentication > Certificates > Certificates . Click the serial number of the certificate to open the certificate information page. Figure 66.2. List of Certificates In the certificate information page, click Actions Revoke Certificate . Select the reason for revoking and click Revoke . See Certificate revocation reasons for details. 66.5.3. Revoking certificates with the integrated IdM CAs using IdM CLI If you know you have lost the private key for your certificate, you must revoke the certificate to prevent its abuse. Complete this procedure to use the IdM CLI to revoke a certificate issued by the IdM CA. Procedure Use the ipa cert-revoke command, and specify: the certificate serial number the ID number for the revocation reason; see Certificate revocation reasons for details For example, to revoke the certificate with serial number 1032 because of reason 1: Key Compromised , enter: For details on requesting a new certificate, see the following documentation: Requesting a new user certificate and exporting it to the client Obtaining an IdM certificate for a service using certmonger . 66.6. Restoring certificates with the integrated IdM CAs If you have revoked a certificate because of reason 6: Certificate Hold , you can restore it again if the private key for the certificate has not been compromised. To restore a certificate, use one of the following procedures: Restore certificates with the integrated IdM CAs using IdM WebUI ; Restore certificates with the integrated IdM CAs using IdM CLI . 66.6.1. Restoring certificates with the integrated IdM CAs using IdM WebUI Complete this procedure to use the IdM WebUI to restore an IdM certificate that has been revoked because of Reason 6: Certificate Hold . Procedure In the Authentication menu, click Certificates > Certificates . Click the serial number of the certificate to open the certificate information page. Figure 66.3. List of Certificates In the certificate information page, click Actions Restore Certificate . 66.6.2. Restoring certificates with the integrated IdM CAs using IdM CLI Complete this procedure to use the IdM CLI to restore an IdM certificate that has been revoked because of Reason 6: Certificate Hold . Procedure Use the ipa cert-remove-hold command and specify the certificate serial number. For example:
[ "openssl x509 -noout -text -in ca.pem Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha256WithRSAEncryption Issuer: O = IDM.EXAMPLE.COM, CN = Certificate Authority Validity Not Before: Oct 30 19:39:14 2017 GMT Not After : Oct 30 19:39:14 2037 GMT", "ipa cert-revoke 1032 --revocation-reason=1", "ipa cert-remove-hold 1032" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-the-validity-of-certificates-in-idm_configuring-and-managing-idm
Chapter 2. Admin REST API
Chapter 2. Admin REST API Red Hat build of Keycloak comes with a fully functional Admin REST API with all features provided by the Admin Console. To invoke the API you need to obtain an access token with the appropriate permissions. The required permissions are described in the Server Administration Guide . You can obtain a token by enabling authentication for your application using Red Hat build of Keycloak; see the Securing Applications and Services Guide. You can also use direct access grant to obtain an access token. 2.1. Examples of using CURL 2.1.1. Authenticating with a username and password Note The following example assumes that you created the user admin with the password password in the master realm as shown in the Getting Started Guide tutorial. Procedure Obtain an access token for the user in the realm master with username admin and password password : curl \ -d "client_id=admin-cli" \ -d "username=admin" \ -d "password=password" \ -d "grant_type=password" \ "http://localhost:8080/realms/master/protocol/openid-connect/token" Note By default this token expires in 1 minute. The result will be a JSON document. To invoke the API, extract the value of the access_token property and include it in the Authorization header of requests to the API. The following example shows how to get the details of the master realm: curl \ -H "Authorization: bearer eyJhbGciOiJSUz..." \ "http://localhost:8080/admin/realms/master" 2.1.2. Authenticating with a service account To authenticate against the Admin REST API using a client_id and a client_secret , perform this procedure. Procedure Make sure the client is configured as follows: client_id is a confidential client that belongs to the realm master client_id has the Service Accounts Enabled option enabled client_id has a custom "Audience" mapper with Included Client Audience set to security-admin-console Check that client_id has the role 'admin' assigned in the "Service Account Roles" tab. curl \ -d "client_id=<YOUR_CLIENT_ID>" \ -d "client_secret=<YOUR_CLIENT_SECRET>" \ -d "grant_type=client_credentials" \ "http://localhost:8080/realms/master/protocol/openid-connect/token" 2.2. Additional resources Server Administration Guide API Documentation
[ "curl -d \"client_id=admin-cli\" -d \"username=admin\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"", "curl -H \"Authorization: bearer eyJhbGciOiJSUz...\" \"http://localhost:8080/admin/realms/master\"", "curl -d \"client_id=<YOUR_CLIENT_ID>\" -d \"client_secret=<YOUR_CLIENT_SECRET>\" -d \"grant_type=client_credentials\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_developer_guide/admin_rest_api
Chapter 12. Distributed tracing
Chapter 12. Distributed tracing 12.1. Enabling distributed tracing The client offers distributed tracing based on the Jaeger implementation of the OpenTracing standard. Use the following steps to enable tracing in your application: Install the tracing dependencies. Red Hat Enterprise Linux 7 $ sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm $ sudo yum install python2-pip $ pip install --user --upgrade setuptools $ pip install --user opentracing jaeger-client Red Hat Enterprise Linux 8 $ sudo dnf install python3-pip $ pip3 install --user opentracing jaeger-client Register the global tracer in your program. Example: Global tracer configuration from proton.tracing import init_tracer tracer = init_tracer(" <service-name> ") For more information about Jaeger configuration, see Jaeger Sampling . When testing or debugging, you may want to force Jaeger to trace a particular operation. See the Jaeger Python client documentation for more information. To view the traces your application captures, use the Jaeger Getting Started guide to run the Jaeger infrastructure and console.
[ "sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm sudo yum install python2-pip pip install --user --upgrade setuptools pip install --user opentracing jaeger-client", "sudo dnf install python3-pip pip3 install --user opentracing jaeger-client", "from proton.tracing import init_tracer tracer = init_tracer(\" <service-name> \")" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_python_client/distributed_tracing
Chapter 7. Enabling accelerators
Chapter 7. Enabling accelerators 7.1. Enabling NVIDIA GPUs Before you can use NVIDIA GPUs in OpenShift AI, you must install the NVIDIA GPU Operator. Important The NVIDIA GPU add-on is no longer supported. Instead, enable GPUs by installing the NVIDIA GPU Operator. If your deployment has a previously-installed NVIDIA GPU add-on, before you install the NVIDIA GPU Operator, use Red Hat OpenShift Cluster Manager to uninstall the NVIDIA GPU add-on from your cluster. Prerequisites You have logged in to your OpenShift cluster. You have the cluster-admin role in your OpenShift cluster. You have installed an NVIDIA GPU and confirmed that it is detected in your environment. Procedure To enable GPU support on an OpenShift cluster, follow the instructions here: NVIDIA GPU Operator on Red Hat OpenShift Container Platform in the NVIDIA documentation. Important After you install the Node Feature Discovery (NFD) Operator, you must create an instance of NodeFeatureDiscovery. In addition, after you install the NVIDIA GPU Operator, you must create a ClusterPolicy and populate it with default values. Delete the migration-gpu-status ConfigMap. In the OpenShift web console, switch to the Administrator perspective. Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate ConfigMap. Search for the migration-gpu-status ConfigMap. Click the action menu (...) and select Delete ConfigMap from the list. The Delete ConfigMap dialog appears. Inspect the dialog and confirm that you are deleting the correct ConfigMap. Click Delete . Restart the dashboard replicaset. In the OpenShift web console, switch to the Administrator perspective. Click Workloads Deployments . Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate deployment. Search for the rhods-dashboard deployment. Click the action menu (...) and select Restart Rollout from the list. Wait until the Status column indicates that all pods in the rollout have fully restarted. Verification The reset migration-gpu-status instance is present on the Instances tab on the AcceleratorProfile custom resource definition (CRD) details page. From the Administrator perspective, go to the Operators Installed Operators page. Confirm that the following Operators appear: NVIDIA GPU Node Feature Discovery (NFD) Kernel Module Management (KMM) The GPU is correctly detected a few minutes after full installation of the Node Feature Discovery (NFD) and NVIDIA GPU Operators. The OpenShift command line interface (CLI) displays the appropriate output for the GPU worker node. For example: Note In OpenShift AI, Red Hat supports the use of accelerators within the same cluster only. Red Hat does not support remote direct memory access (RDMA) between accelerators, or the use of accelerators across a network, for example, by using technology such as NVIDIA GPUDirect or NVLink. After installing the NVIDIA GPU Operator, create an accelerator profile as described in Working with accelerator profiles . 7.2. Intel Gaudi AI Accelerator integration To accelerate your high-performance deep learning models, you can integrate Intel Gaudi AI accelerators into OpenShift AI. This integration enables your data scientists to use Gaudi libraries and software associated with Intel Gaudi AI accelerators through custom-configured workbench instances. 
Intel Gaudi AI accelerators offer optimized performance for deep learning workloads, with the latest Gaudi 3 devices providing significant improvements in training speed and energy efficiency. These accelerators are suitable for enterprises running machine learning and AI applications on OpenShift AI. Before you can enable Intel Gaudi AI accelerators in OpenShift AI, you must complete the following steps: Install the latest version of the Intel Gaudi AI Accelerator Operator from OperatorHub. Create and configure a custom workbench image for Intel Gaudi AI accelerators. A prebuilt workbench image for Gaudi accelerators is not included in OpenShift AI. Manually define and configure an accelerator profile for each Intel Gaudi AI device in your environment. OpenShift AI supports Intel Gaudi devices up to Intel Gaudi 3. The Intel Gaudi 3 accelerators, in particular, offer the following benefits: Improved training throughput: Reduce the time required to train large models by using advanced tensor processing cores and increased memory bandwidth. Energy efficiency: Lower power consumption while maintaining high performance, reducing operational costs for large-scale deployments. Scalable architecture: Scale across multiple nodes for distributed training configurations. Your OpenShift platform must support EC2 DL1 instances to use Intel Gaudi AI accelerators in an Amazon EC2 DL1 instance. You can use Intel Gaudi AI accelerators in workbench instances or model serving after you enable the accelerators, create a custom workbench image, and configure the accelerator profile. To identify the Intel Gaudi AI accelerators present in your deployment, use the lspci utility. For more information, see lspci(8) - Linux man page . Important The presence of Intel Gaudi AI accelerators in your deployment, as indicated by the lspci utility, does not guarantee that the devices are ready to use. You must ensure that all installation and configuration steps are completed successfully. Additional resources lspci(8) - Linux man page Amazon EC2 DL1 Instances Intel Gaudi AI Operator OpenShift installation What version of the Kubernetes API is included with each OpenShift 4.x release? 7.2.1. Enabling Intel Gaudi AI accelerators Before you can use Intel Gaudi AI accelerators in OpenShift AI, you must install the required dependencies, deploy the Intel Gaudi AI Accelerator Operator, and configure the environment. Prerequisites You have logged in to OpenShift. You have the cluster-admin role in OpenShift. You have installed your Intel Gaudi accelerator and confirmed that it is detected in your environment. Your OpenShift environment supports EC2 DL1 instances if you are running on Amazon Web Services (AWS). You have installed the OpenShift command-line interface (CLI). Procedure Install the latest version of the Intel Gaudi AI Accelerator Operator, as described in Intel Gaudi AI Operator OpenShift installation . By default, OpenShift sets a per-pod PID limit of 4096. If your workload requires more processing power, such as when you use multiple Gaudi accelerators or when using vLLM with Ray, you must manually increase the per-pod PID limit to avoid Resource temporarily unavailable errors. These errors occur due to PID exhaustion. Red Hat recommends setting this limit to 32768, although values over 20000 are sufficient. Run the following command to label the node: Optional: To prevent workload distribution on the affected node, you can mark the node as unschedulable and then drain it in preparation for maintenance. 
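The labeling and optional drain steps described above map to standard oc commands. The following is a minimal sketch, with <node_name> as a placeholder and typical drain options that you may need to adjust for your workloads:
# Apply the label used by the PID-limit configuration later in this procedure
oc label node <node_name> custom-kubelet=set-pod-pid-limit-kubelet
# Optional: mark the node unschedulable and evacuate it before the change
oc adm cordon <node_name>
oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data
# After the node has rebooted and you are ready to schedule workloads again
oc adm uncordon <node_name>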
For more information, see Understanding how to evacuate pods on nodes . Create a custom-kubelet-pidslimit.yaml KubeletConfig resource file: Populate the file with the following YAML code. Set the PodPidsLimit value to 32768: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet-pidslimit spec: kubeletConfig: PodPidsLimit: 32768 machineConfigPoolSelector: matchLabels: custom-kubelet: set-pod-pid-limit-kubelet Apply the configuration: This operation causes the node to reboot. For more information, see Understanding node rebooting . Optional: If you previously marked the node as unschedulable, you can allow scheduling again after the node reboots. Create a custom workbench image for Intel Gaudi AI accelerators, as described in Creating custom workbench images . After installing the Intel Gaudi AI Accelerator Operator, create an accelerator profile, as described in Working with accelerator profiles . Verification From the Administrator perspective, go to the Operators Installed Operators page. Confirm that the following Operators appear: Intel Gaudi AI Accelerator Node Feature Discovery (NFD) Kernel Module Management (KMM) 7.3. AMD GPU Integration You can use AMD GPUs with OpenShift AI to accelerate AI and machine learning (ML) workloads. AMD GPUs provide high-performance compute capabilities, allowing users to process large data sets, train deep neural networks, and perform complex inference tasks more efficiently. Integrating AMD GPUs with OpenShift AI involves the following components: ROCm workbench images : Use the ROCm workbench images to streamline AI/ML workflows on AMD GPUs. These images include libraries and frameworks optimized with the AMD ROCm platform, enabling high-performance workloads for PyTorch and TensorFlow. The pre-configured images reduce setup time and provide an optimized environment for GPU-accelerated development and experimentation. AMD GPU Operator : The AMD GPU Operator simplifies GPU integration by automating driver installation, device plugin setup, and node labeling for GPU resource management. It ensures compatibility between OpenShift and AMD hardware while enabling scaling of GPU-enabled workloads. 7.3.1. Verifying AMD GPU availability on your cluster Before you proceed with the AMD GPU Operator installation process, you can verify the presence of an AMD GPU device on a node within your OpenShift cluster. You can use commands such as lspci or oc to confirm hardware and resource availability. Prerequisites You have administrative access to the OpenShift cluster. You have a running OpenShift cluster with a node equipped with an AMD GPU. You have access to the OpenShift CLI ( oc ) and terminal access to the node. Procedure Use the OpenShift CLI to verify if GPU resources are allocatable: List all nodes in the cluster to identify the node with an AMD GPU: Note the name of the node where you expect the AMD GPU to be present. Describe the node to check its resource allocation: In the output, locate the Capacity and Allocatable sections and confirm that amd.com/gpu is listed. For example: Check for the AMD GPU device using the lspci command: Log in to the node: Run the lspci command and search for the supported AMD device in your deployment. For example: Verify that the output includes one of the AMD GPU models. For example: Optional: Use the rocminfo command if the ROCm stack is installed on the node: Confirm that the ROCm tool outputs details about the AMD GPU, such as compute units, memory, and driver status. 
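Taken together, the availability checks above can be run from a terminal; a short sketch, with <node_name> as a placeholder:
# List nodes and identify the one expected to carry the AMD GPU
oc get nodes
# Confirm that amd.com/gpu appears under Capacity and Allocatable
oc describe node <node_name> | grep amd.com/gpu
# Run lspci against the host from a debug pod and filter for supported devices
oc debug node/<node_name> -- chroot /host lspci | grep -E "MI210|MI250|MI300"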
Verification The oc describe node <node_name> command lists amd.com/gpu under Capacity and Allocatable . The lspci command output identifies an AMD GPU as a PCI device matching one of the specified models (for example, MI210, MI250, MI300). Optional: The rocminfo tool provides detailed GPU information, confirming driver and hardware configuration. Additional resources AMD GPU Operator GitHub Repository 7.3.2. Enabling AMD GPUs Before you can use AMD GPUs in OpenShift AI, you must install the required dependencies, deploy the AMD GPU Operator, and configure the environment. Prerequisites You have logged in to OpenShift. You have the cluster-admin role in OpenShift. You have installed your AMD GPU and confirmed that it is detected in your environment. Your OpenShift environment supports EC2 DL1 instances if you are running on Amazon Web Services (AWS). Procedure Install the latest version of the AMD GPU Operator, as described in Install AMD GPU Operator on OpenShift . After installing the AMD GPU Operator, configure the AMD drivers required by the Operator as described in the documentation: Configure AMD drivers for the GPU Operator . Note Alternatively, you can install the AMD GPU Operator from the Red Hat Catalog. For more information, see Install AMD GPU Operator from Red Hat Catalog . After installing the AMD GPU Operator, create an accelerator profile, as described in Working with accelerator profiles . Verification From the Administrator perspective, go to the Operators Installed Operators page. Confirm that the following Operators appear: AMD GPU Operator Node Feature Discovery (NFD) Kernel Module Management (KMM) Note Ensure that you follow all the steps for proper driver installation and configuration. Incorrect installation or configuration may prevent the AMD GPUs from being recognized or functioning properly.
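As a quick smoke test after the Operator and an accelerator profile are in place, a workload can request the GPU through the amd.com/gpu resource name shown above. This is a hypothetical sketch only; the pod name and container image are placeholders and are not part of the documented procedure:
oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: amd-gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: rocm
    image: <rocm-capable-image>   # placeholder: any image that provides rocminfo
    command: ["rocminfo"]
    resources:
      limits:
        amd.com/gpu: 1            # resource name exposed by the AMD GPU Operator
EOF
# Inspect the output once the pod has completed
oc logs amd-gpu-smoke-test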
[ "Expected output when the GPU is detected properly describe node <node name> Capacity: cpu: 4 ephemeral-storage: 313981932Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16076568Ki nvidia.com/gpu: 1 pods: 250 Allocatable: cpu: 3920m ephemeral-storage: 288292006229 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 12828440Ki nvidia.com/gpu: 1 pods: 250", "label node <node_name> custom-kubelet=set-pod-pid-limit-kubelet", "create -f custom-kubelet-pidslimit.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet-pidslimit spec: kubeletConfig: PodPidsLimit: 32768 machineConfigPoolSelector: matchLabels: custom-kubelet: set-pod-pid-limit-kubelet", "apply -f custom-kubelet-pidslimit.yaml", "get nodes", "describe node <node_name>", "Capacity: amd.com/gpu: 1 Allocatable: amd.com/gpu: 1", "debug node/<node_name> chroot /host", "lspci | grep -E \"MI210|MI250|MI300\"", "03:00.0 Display controller: Advanced Micro Devices, Inc. [AMD] Instinct MI210", "rocminfo" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_openshift_ai/enabling_accelerators
Chapter 5. Setting up PCP
Chapter 5. Setting up PCP Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for monitoring, visualizing, storing, and analyzing system-level performance measurements. 5.1. Overview of PCP You can add performance metrics using Python, Perl, C++, and C interfaces. Analysis tools can use the Python, C++, C client APIs directly, and rich web applications can explore all available performance data using a JSON interface. You can analyze data patterns by comparing live results with archived data. Features of PCP: Light-weight distributed architecture, which is useful during the centralized analysis of complex systems. It allows the monitoring and management of real-time data. It allows logging and retrieval of historical data. PCP has the following components: The Performance Metric Collector Daemon ( pmcd ) collects performance data from the installed Performance Metric Domain Agents ( pmda ). PMDAs can be individually loaded or unloaded on the system and are controlled by the PMCD on the same host. Various client tools, such as pminfo or pmstat , can retrieve, display, archive, and process this data on the same host or over the network. The pcp package provides the command-line tools and underlying functionality. The pcp-gui package provides the graphical application. Install the pcp-gui package by executing the dnf install pcp-gui command. For more information, see Visually tracing PCP log archives with the PCP Charts application . Additional resources pcp(1) man page on your system /usr/share/doc/pcp-doc/ directory System services and tools distributed with PCP Index of Performance Co-Pilot (PCP) articles, solutions, tutorials, and white papers on the Red Hat Customer Portal Side-by-side comparison of PCP tools with legacy tools (Red Hat Knowledgebase article) PCP upstream documentation 5.2. Installing and enabling PCP To begin using PCP, install all the required packages and enable the PCP monitoring services. This procedure describes how to install PCP using the pcp package. If you want to automate the PCP installation, install it using the pcp-zeroconf package. For more information about installing PCP by using pcp-zeroconf , see Setting up PCP with pcp-zeroconf . Procedure Install the pcp package: Enable and start the pmcd service on the host machine: Verification Verify that the pmcd process is running on the host: Additional resources pmcd(1) man page on your system System services and tools distributed with PCP 5.3. Deploying a minimal PCP setup The minimal PCP setup collects performance statistics on Red Hat Enterprise Linux. The setup involves adding the minimum number of packages on a production system needed to gather data for further analysis. You can analyze the resulting tar.gz file and the archive of the pmlogger output using various PCP tools and compare them with other sources of performance information. Prerequisites PCP is installed. For more information, see Installing and enabling PCP . Procedure Update the pmlogger configuration: Start the pmcd and pmlogger services: Execute the required operations to record the performance data. Stop the pmcd and pmlogger services: Save the output to a tar.gz file named after the host name and the current date and time: Extract this file and analyze the data using PCP tools. Additional resources pmlogconf(1) , pmlogger(1) , and pmcd(1) man pages on your system System services and tools distributed with PCP 5.4.
System services and tools distributed with PCP Performance Co-Pilot (PCP) includes various system services and tools you can use for measuring performance. The basic package pcp includes the system services and basic tools. Additional tools are provided with the pcp-system-tools , pcp-gui , and pcp-devel packages. Roles of system services distributed with PCP pmcd The Performance Metric Collector Daemon (PMCD). pmie The Performance Metrics Inference Engine. pmlogger The performance metrics logger. pmproxy The realtime and historical performance metrics proxy, time series query and REST API service. Tools distributed with base PCP package pcp Displays the current status of a Performance Co-Pilot installation. pcp-vmstat Provides a high-level system performance overview every 5 seconds. Displays information about processes, memory, paging, block IO, traps, and CPU activity. pmconfig Displays the values of configuration parameters. pmdiff Compares the average values for every metric in either one or two archives, in a given time window, for changes that are likely to be of interest when searching for performance regressions. pmdumplog Displays control, metadata, index, and state information from a Performance Co-Pilot archive file. pmfind Finds PCP services on the network. pmie An inference engine that periodically evaluates a set of arithmetic, logical, and rule expressions. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file. pmieconf Displays or sets configurable pmie variables. pmiectl Manages non-primary instances of pmie . pminfo Displays information about performance metrics. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file. pmlc Interactively configures active pmlogger instances. pmlogcheck Identifies invalid data in a Performance Co-Pilot archive file. pmlogconf Creates and modifies a pmlogger configuration file. pmlogctl Manages non-primary instances of pmlogger . pmloglabel Verifies, modifies, or repairs the label of a Performance Co-Pilot archive file. pmlogredact Removes sensitive information from PCP archives. pmlogsummary Calculates statistical information about performance metrics stored in a Performance Co-Pilot archive file. pmprobe Determines the availability of performance metrics. pmsocks Allows access to a Performance Co-Pilot hosts through a firewall. pmstat Periodically displays a brief summary of system performance. pmstore Modifies the values of performance metrics. pmtrace Provides a command-line interface to the trace PMDA. pmval Displays the current value of a performance metric. Tools distributed with the separately installed pcp-system-tools package pcp-atop Shows the system-level occupation of the most critical hardware resources from the performance point of view: CPU, memory, disk, and network. pcp-atopsar Generates a system-level activity report over a variety of system resource utilization. The report is generated from a raw logfile previously recorded using pmlogger or the -w option of pcp-atop . pcp-buddyinfo Reports statistics for the buddy algorithm. pcp-dmcache Displays information about configured Device Mapper Cache targets, such as: device IOPs, cache and metadata device utilization, as well as hit and miss rates and ratios for both reads and writes for each cache device. pcp-dstat Displays metrics of one system at a time. To display metrics of multiple systems, use --host option. pcp-free Reports on free and used memory in a system. 
pcp-htop Displays all processes running on a system along with their command line arguments in a manner similar to the top command, but allows you to scroll vertically and horizontally as well as interact using a mouse. You can also view processes in a tree format and select and act on multiple processes at once. pcp-ipcs Displays information about the inter-process communication (IPC) facilities that the calling process has read access for. pcp-meminfo Reports statistics for the kernel system memory. pcp-mpstat Reports CPU and interrupt-related statistics. pcp-netstat Reports statistics for network protocols and network interfaces. pcp-numastat Displays NUMA allocation statistics from the kernel memory allocator. pcp-pidstat Displays information about individual tasks or processes running on the system, such as CPU percentage, memory and stack usage, scheduling, and priority. Reports live data for the local host by default. pcp-shping Samples and reports on the shell-ping service metrics exported by the pmdashping Performance Metrics Domain Agent (PMDA). pcp-slabinfo Reports statistics for the kernel slab allocator. pcp-ss Displays socket statistics collected by the pmdasockets PMDA. pcp-tapestat Reports I/O statistics for tape devices. pcp-uptime Displays how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes. pcp-zoneinfo Reports statistics related to Non-Uniform Memory Access (NUMA) nodes. pcp-verify Inspects various aspects of a Performance Co-Pilot collector installation and reports on whether it is configured correctly for certain modes of operation. pmiostat Reports I/O statistics for SCSI devices (by default) or device-mapper devices (with the -x device-mapper option). pmrep Reports on selected, easily customizable, performance metrics values. Tools distributed with the separately installed pcp-gui package pmchart Plots performance metrics values available through the facilities of the Performance Co-Pilot. pmdumptext Outputs the values of performance metrics collected live or from a Performance Co-Pilot archive. Tools distributed with the separately installed pcp-devel package pmclient Displays high-level system performance metrics by using the Performance Metrics Application Programming Interface (PMAPI). pmdbg Displays available Performance Co-Pilot debug control flags and their values. pmerr Displays available Performance Co-Pilot error codes and their corresponding error messages. Tool distributed with the separately installed pcp-geolocate package pcp-geolocate Discovers collector system geographical labels and reports the latitude and longitude for the local PCP collector host in JSON format. 5.5. PCP deployment architectures Performance Co-Pilot (PCP) supports multiple deployment architectures, based on the scale of the PCP deployment, and offers many options to accomplish advanced setups. Available scaling deployment setup variants based on the recommended deployment set up by Red Hat, sizing factors, and configuration options include: Localhost Each service runs locally on the monitored machine. When you start a service without any configuration changes, this is the default deployment. Scaling beyond the individual node is not possible in this case. By default, the deployment setup for Redis is standalone, localhost. However, Redis can optionally perform in a highly-available and highly scalable clustered fashion, where data is shared across multiple hosts. 
Another viable option is to deploy a Redis cluster in the cloud, or to utilize a managed Redis cluster from a cloud vendor. Decentralized The only difference between localhost and decentralized setup is the centralized Redis service. In this model, the host executes pmlogger service on each monitored host and retrieves metrics from a local pmcd instance. A local pmproxy service then exports the performance metrics to a central Redis instance. Figure 5.1. Decentralized logging Centralized logging - pmlogger farm When the resource usage on the monitored hosts is constrained, another deployment option is a pmlogger farm, which is also known as centralized logging. In this setup, a single logger host executes multiple pmlogger processes, and each is configured to retrieve performance metrics from a different remote pmcd host. The centralized logger host is also configured to execute the pmproxy service, which discovers the resulting PCP archives logs and loads the metric data into a Redis instance. Figure 5.2. Centralized logging - pmlogger farm Federated - multiple pmlogger farms For large scale deployments, Red Hat recommends to deploy multiple pmlogger farms in a federated fashion. For example, one pmlogger farm per rack or data center. Each pmlogger farm loads the metrics into a central Redis instance. Figure 5.3. Federated - multiple pmlogger farms Note By default, the deployment setup for Redis is standalone, localhost. However, Redis can optionally perform in a highly-available and highly scalable clustered fashion, where data is shared across multiple hosts. Another viable option is to deploy a Redis cluster in the cloud, or to utilize a managed Redis cluster from a cloud vendor. Additional resources pcp(1) , pmlogger(1) , pmproxy(1) , and pmcd(1) man pages on your system Recommended deployment architecture 5.6. Recommended deployment architecture The following table describes the recommended deployment architectures based on the number of monitored hosts. Table 5.1. Recommended deployment architecture Number of hosts (N) 1-10 10-100 100-1000 pmcd servers N N N pmlogger servers 1 to N N/10 to N N/100 to N pmproxy servers 1 to N 1 to N N/100 to N Redis servers 1 to N 1 to N/10 N/100 to N/10 Redis cluster No Maybe Yes Recommended deployment setup Localhost, Decentralized, or Centralized logging Decentralized, Centralized logging, or Federated Decentralized or Federated 5.7. Sizing factors The following are the sizing factors required for scaling: Remote system size The number of CPUs, disks, network interfaces, and other hardware resources affects the amount of data collected by each pmlogger on the centralized logging host. Logged Metrics The number and types of logged metrics play an important role. In particular, the per-process proc.* metrics require a large amount of disk space, for example, with the standard pcp-zeroconf setup, 10s logging interval, 11 MB without proc metrics versus 155 MB with proc metrics - a factor of 10 times more. Additionally, the number of instances for each metric, for example the number of CPUs, block devices, and network interfaces also impacts the required storage capacity. Logging Interval The interval how often metrics are logged, affects the storage requirements. The expected daily PCP archive file sizes are written to the pmlogger.log file for each pmlogger instance. These values are uncompressed estimates. Since PCP archives compress very well, approximately 10:1, the actual long term disk space requirements can be determined for a particular site. 
pmlogrewrite After every PCP upgrade, the pmlogrewrite tool is executed and rewrites old archives if there were changes in the metric metadata from the version and the new version of PCP. This process duration scales linear with the number of archives stored. Additional resources pmlogrewrite(1) and pmlogger(1) man pages on your system 5.8. Configuration options for PCP scaling The following are the configuration options, which are required for scaling: sysctl and rlimit settings When archive discovery is enabled, pmproxy requires four descriptors for every pmlogger that it is monitoring or log-tailing, along with the additional file descriptors for the service logs and pmproxy client sockets, if any. Each pmlogger process uses about 20 file descriptors for the remote pmcd socket, archive files, service logs, and others. In total, this can exceed the default 1024 soft limit on a system running around 200 pmlogger processes. The pmproxy service in pcp-5.3.0 and later automatically increases the soft limit to the hard limit. On earlier versions of PCP, tuning is required if a high number of pmlogger processes are to be deployed, and this can be accomplished by increasing the soft or hard limits for pmlogger . For more information, see the Red Hat Knowledgebase solution How to set limits (ulimit) for services run by systemd . Local Archives The pmlogger service stores metrics of local and remote pmcds in the /var/log/pcp/pmlogger/ directory. To control the logging interval of the local system, update the /etc/pcp/pmlogger/control.d/ configfile file and add -t X in the arguments, where X is the logging interval in seconds. To configure which metrics should be logged, execute pmlogconf /var/lib/pcp/config/pmlogger/config. clienthostname . This command deploys a configuration file with a default set of metrics, which can optionally be further customized. To specify retention settings, that is when to purge old PCP archives, update the /etc/sysconfig/pmlogger_timers file and specify PMLOGGER_DAILY_PARAMS="-E -k X " , where X is the amount of days to keep PCP archives. Redis The pmproxy service sends logged metrics from pmlogger to a Redis instance. The following are the available two options to specify the retention settings in the /etc/pcp/pmproxy/pmproxy.conf configuration file: stream.expire specifies the duration when stale metrics should be removed, that is metrics which were not updated in a specified amount of time in seconds. stream.maxlen specifies the maximum number of metric values for one metric per host. This setting should be the retention time divided by the logging interval, for example 20160 for 14 days of retention and 60s logging interval (60*60*24*14/60) Additional resources pmproxy(1) , pmlogger(1) , and sysctl(8) man pages on your system 5.9. Example: Analyzing the centralized logging deployment The following results were gathered on a centralized logging setup, also known as pmlogger farm deployment, with a default pcp-zeroconf 5.3.0 installation, where each remote host is an identical container instance running pmcd on a server with 64 CPU cores, 376 GB RAM, and one disk attached. The logging interval is 10s, proc metrics of remote nodes are not included, and the memory values refer to the Resident Set Size (RSS) value. Table 5.2. 
Detailed utilization statistics for 10s logging interval Number of Hosts 10 50 PCP Archives Storage per Day 91 MB 522 MB pmlogger Memory 160 MB 580 MB pmlogger Network per Day (In) 2 MB 9 MB pmproxy Memory 1.4 GB 6.3 GB Redis Memory per Day 2.6 GB 12 GB Table 5.3. Used resources depending on monitored hosts for 60s logging interval Number of Hosts 10 50 100 PCP Archives Storage per Day 20 MB 120 MB 271 MB pmlogger Memory 104 MB 524 MB 1049 MB pmlogger Network per Day (In) 0.38 MB 1.75 MB 3.48 MB pmproxy Memory 2.67 GB 5.5GB 9 GB Redis Memory per Day 0.54 GB 2.65 GB 5.3 GB Note The pmproxy queues Redis requests and employs Redis pipelining to speed up Redis queries. This can result in high memory usage. For troubleshooting this issue, see Troubleshooting high memory usage . 5.10. Example: Analyzing the federated setup deployment The following results were observed on a federated setup, also known as multiple pmlogger farms, consisting of three centralized logging ( pmlogger farm) setups, where each pmlogger farm was monitoring 100 remote hosts, that is 300 hosts in total. This setup of the pmlogger farms is identical to the configuration mentioned in the Example: Analyzing the centralized logging deployment for 60s logging interval, except that the Redis servers were operating in cluster mode. Table 5.4. Used resources depending on federated hosts for 60s logging interval PCP Archives Storage per Day pmlogger Memory Network per Day (In/Out) pmproxy Memory Redis Memory per Day 277 MB 1058 MB 15.6 MB / 12.3 MB 6-8 GB 5.5 GB Here, all values are per host. The network bandwidth is higher due to the inter-node communication of the Redis cluster. 5.11. Establishing secure PCP connections You can configure PCP collector and monitoring components to participate in secure PCP protocol exchanges. 5.11.1. Secure PCP connections You can establish secure connections between Performance Co-Pilot (PCP) collector and monitoring components. PCP collector components are the parts of PCP that collect and extract performance data from different sources. PCP monitor components are the parts of PCP that display data collected from hosts or archives that have the PCP collector components installed. Establishing secure connections between these components helps prevent unauthorized parties from accessing or modifying the data being collected and monitored. All connections with the Performance Metrics Collector Daemon ( pmcd ) are made using the TCP/IP based PCP protocol. Protocol proxying and the PCP REST APIs are served by the pmproxy daemon - the REST API can be accessed over HTTPS, ensuring a secure connection. Both the pmcd and pmproxy daemons are capable of simultaneous TLS and non-TLS communications on a single port. The default port for pmcd is 44321 and 44322 for pmproxy . This means that you do not have to choose between TLS or non-TLS communications for your PCP collector systems and can use both at the same time. 5.11.2. Configuring secure connections for PCP collector components All PCP collector systems must have valid certificates in order to participate in secure PCP protocol exchanges. Note the pmproxy daemon operates as both a client and a server from the perspective of TLS. Prerequisites PCP is installed. For more information, see Installing and enabling PCP . The private client key is stored in the /etc/pcp/tls/client.key file. If you use a different path, adapt the corresponding steps of the procedure. 
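If you still need to create the client key and a certificate signing request, a generic openssl sketch follows; the subject name is a placeholder, and your CA may have additional requirements:
# Create the private client key at the default location used by this procedure
openssl genrsa -out /etc/pcp/tls/client.key 2048
# Generate a CSR to submit to your certificate authority
openssl req -new -key /etc/pcp/tls/client.key -subj "/CN=pcp-client.example.com" -out /etc/pcp/tls/client.csr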
For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA's documentation. The TLS client certificate is stored in the /etc/pcp/tls/client.crt file. If you use a different path, adapt the corresponding steps of the procedure. The CA certificate is stored in the /etc/pcp/tls/ca.crt file. If you use a different path, adapt the corresponding steps of the procedure. Additionally, for the pmproxy daemon: The private server key is stored in the /etc/pcp/tls/server.key file. If you use a different path, adapt the corresponding steps of the procedure The TLS server certificate is stored in the /etc/pcp/tls/server.crt file. If you use a different path, adapt the corresponding steps of the procedure. Procedure Update the PCP TLS configuration file on the collector systems to use the CA issued certificates to establish a secure connection: Restart the PCP collector infrastructure: Verification Verify the TLS configuration: On the pmcd service: On the pmproxy service: 5.11.3. Configuring secure connections for PCP monitoring components Configure your PCP monitoring components to participate in secure PCP protocol exchanges. Prerequisites PCP is installed. For more information, see Installing and enabling PCP . The private client key is stored in the ~/.pcp/tls/client.key file. If you use a different path, adapt the corresponding steps of the procedure. For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA's documentation. The TLS client certificate is stored in the ~/.pcp/tls/client.crt file. If you use a different path, adapt the corresponding steps of the procedure. The CA certificate is stored in the /etc/pcp/tls/ca.crt file. If you use a different path, adapt the corresponding steps of the procedure. Procedure Create a TLS configuration file with the following information: Establish the secure connection: Verification Verify the secure connection is configured: 5.12. Troubleshooting high memory usage The following scenarios can result in high memory usage: The pmproxy process is busy processing new PCP archives and does not have spare CPU cycles to process Redis requests and responses. The Redis node or cluster is overloaded and cannot process incoming requests on time. The pmproxy service daemon uses Redis streams and supports the configuration parameters, which are PCP tuning parameters and affects Redis memory usage and key retention. The /etc/pcp/pmproxy/pmproxy.conf file lists the available configuration options for pmproxy and the associated APIs. The following procedure describes how to troubleshoot high memory usage issue. Prerequisites Install the pcp-pmda-redis package: Install the redis PMDA: Procedure To troubleshoot high memory usage, execute the following command and observe the inflight column: This column shows how many Redis requests are in-flight, which means they are queued or sent, and no reply was received so far. A high number indicates one of the following conditions: The pmproxy process is busy processing new PCP archives and does not have spare CPU cycles to process Redis requests and responses. The Redis node or cluster is overloaded and cannot process incoming requests on time. To troubleshoot the high memory usage issue, reduce the number of pmlogger processes for this farm, and add another pmlogger farm. 
Use the federated - multiple pmlogger farms setup. If the Redis node is using 100% CPU for an extended amount of time, move it to a host with better performance or use a clustered Redis setup instead. To view the pmproxy.redis.* metrics, use the following command: To view how many Redis requests are inflight, see the pmproxy.redis.requests.inflight.total metric and pmproxy.redis.requests.inflight.bytes metric to view how many bytes are occupied by all current inflight Redis requests. In general, the redis request queue would be zero but can build up based on the usage of large pmlogger farms, which limits scalability and can cause high latency for pmproxy clients. Use the pminfo command to view information about performance metrics. For example, to view the redis.* metrics, use the following command: To view the peak memory usage, see the redis.used_memory_peak metric. Additional resources pmdaredis(1) , pmproxy(1) , and pminfo(1) man pages on your system PCP deployment architectures
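As a final note on the troubleshooting steps above, you can fetch just the metrics called out in this section instead of scrolling through the full pminfo listings; a short sketch, assuming the redis PMDA from the prerequisites is installed:
# Current number of inflight Redis requests and the bytes they occupy
pminfo -f pmproxy.redis.requests.inflight.total pmproxy.redis.requests.inflight.bytes
# Peak memory consumed by the Redis server
pminfo -f redis.used_memory_peak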
[ "dnf install pcp", "systemctl enable pmcd systemctl start pmcd", "pcp Performance Co-Pilot configuration on workstation: platform: Linux workstation 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 hardware: 12 cpus, 2 disks, 1 node, 36023MB RAM timezone: CEST-2 services: pmcd pmcd: Version 4.3.0-1, 8 agents pmda: root pmcd proc xfs linux mmv kvm jbd2", "pmlogconf -r /var/lib/pcp/config/pmlogger/config.default", "systemctl start pmcd.service systemctl start pmlogger.service", "systemctl stop pmcd.service systemctl stop pmlogger.service", "cd /var/log/pcp/pmlogger/ tar -czf USD(hostname).USD(date +%F-%Hh%M).pcp.tar.gz USD(hostname)", "cat > /etc/pcp/tls.conf << END tls-ca-cert-file = /etc/pcp/tls/ca.crt tls-key-file = /etc/pcp/tls/server.key tls-cert-file = /etc/pcp/tls/server.crt tls-client-key-file = /etc/pcp/tls/client.key tls-client-cert-file = /etc/pcp/tls/client.crt END", "systemctl restart pmcd.service systemctl restart pmproxy.service", "grep 'Info:' /var/log/pcp/pmcd/pmcd.log [Tue Feb 07 11:47:33] pmcd(6558) Info: OpenSSL 3.0.7 setup", "grep 'Info:' /var/log/pcp/pmproxy/pmproxy.log [Tue Feb 07 11:44:13] pmproxy(6014) Info: OpenSSL 3.0.7 setup", "home= echo ~ cat > ~/.pcp/tls.conf << END tls-ca-cert-file = /etc/pcp/tls/ca.crt tls-key-file = USDhome/.pcp/tls/client.key tls-cert-file = USDhome/.pcp/tls/client.crt END", "export PCP_SECURE_SOCKETS=enforce export PCP_TLSCONF_PATH=~/.pcp/tls.conf", "pminfo --fetch --host pcps://localhost kernel.all.load kernel.all.load inst [1 or \"1 minute\"] value 1.26 inst [5 or \"5 minute\"] value 1.29 inst [15 or \"15 minute\"] value 1.28", "dnf install pcp-pmda-redis", "cd /var/lib/pcp/pmdas/redis && ./Install", "pmrep :pmproxy backlog inflight reqs/s resp/s wait req err resp err changed throttled byte count count/s count/s s/s count/s count/s count/s count/s 14:59:08 0 0 N/A N/A N/A N/A N/A N/A N/A 14:59:09 0 0 2268.9 2268.9 28 0 0 2.0 4.0 14:59:10 0 0 0.0 0.0 0 0 0 0.0 0.0 14:59:11 0 0 0.0 0.0 0 0 0 0.0 0.0", "pminfo -ftd pmproxy.redis pmproxy.redis.responses.wait [wait time for responses] Data Type: 64-bit unsigned int InDom: PM_INDOM_NULL 0xffffffff Semantics: counter Units: microsec value 546028367374 pmproxy.redis.responses.error [number of error responses] Data Type: 64-bit unsigned int InDom: PM_INDOM_NULL 0xffffffff Semantics: counter Units: count value 1164 [...] pmproxy.redis.requests.inflight.bytes [bytes allocated for inflight requests] Data Type: 64-bit int InDom: PM_INDOM_NULL 0xffffffff Semantics: discrete Units: byte value 0 pmproxy.redis.requests.inflight.total [inflight requests] Data Type: 64-bit unsigned int InDom: PM_INDOM_NULL 0xffffffff Semantics: discrete Units: count value 0 [...]", "pminfo -ftd redis redis.redis_build_id [Build ID] Data Type: string InDom: 24.0 0x6000000 Semantics: discrete Units: count inst [0 or \"localhost:6379\"] value \"87e335e57cffa755\" redis.total_commands_processed [Total number of commands processed by the server] Data Type: 64-bit unsigned int InDom: 24.0 0x6000000 Semantics: counter Units: count inst [0 or \"localhost:6379\"] value 595627069 [...] redis.used_memory_peak [Peak memory consumed by Redis (in bytes)] Data Type: 32-bit unsigned int InDom: 24.0 0x6000000 Semantics: instant Units: count inst [0 or \"localhost:6379\"] value 572234920 [...]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/setting-up-pcp_monitoring-and-managing-system-status-and-performance
probe::ioblock.request
probe::ioblock.request Name probe::ioblock.request - Fires whenever a generic block I/O request is made. Synopsis Values None Description name - name of the probe point devname - block device name ino - i-node number of the mapped file sector - beginning sector for the entire bio flags - see below BIO_UPTODATE 0 ok after I/O completion BIO_RW_BLOCK 1 RW_AHEAD set, and read/write would block BIO_EOF 2 out-of-bounds error BIO_SEG_VALID 3 nr_hw_seg valid BIO_CLONED 4 doesn't own data BIO_BOUNCED 5 bio is a bounce bio BIO_USER_MAPPED 6 contains user pages BIO_EOPNOTSUPP 7 not supported rw - binary trace for read/write request vcnt - bio vector count, which represents the number of array elements (page, offset, length) that make up this I/O request idx - offset into the bio vector array phys_segments - number of segments in this bio after physical address coalescing is performed hw_segments - number of segments after physical and DMA remapping hardware coalescing is performed size - total size in bytes bdev - target block device bdev_contains - points to the device object which contains the partition (when bio structure represents a partition) p_start_sect - points to the start sector of the partition structure of the device Context The process making the block I/O request
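A minimal sketch of using this probe point from the command line, printing a few of the values listed above for every request (SystemTap requires the usual kernel debuginfo packages to be installed):
stap -e 'probe ioblock.request { printf("%s %s sector=%d size=%d rw=%d\n", execname(), devname, sector, size, rw) }'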
[ "ioblock.request" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ioblock-request
Chapter 1. Configuring and deploying a Red Hat OpenStack Platform hyperconverged infrastructure
Chapter 1. Configuring and deploying a Red Hat OpenStack Platform hyperconverged infrastructure 1.1. Hyperconverged infrastructure overview Red Hat OpenStack Platform (RHOSP) hyperconverged infrastructures (HCI) consist of hyperconverged nodes. In RHOSP HCI, the Compute and Storage services are colocated on these hyperconverged nodes for optimized resource use. You can deploy an overcloud with only hyperconverged nodes, or a mixture of hyperconverged nodes with normal Compute and Red Hat Ceph Storage nodes. Note You must use Red Hat Ceph Storage as the storage provider. Tip Use BlueStore as the back end for HCI deployments to make use of the BlueStore memory handling features. Hyperconverged infrastructures are built using a variation of the deployment process described in Deploying Red Hat Ceph and OpenStack together with director . In this deployment scenario, RHOSP director deploys your cloud environment, which director calls the overcloud, and Red Hat Ceph Storage. You manage and scale the Ceph cluster itself separately from the overcloud configuration. Important Instance HA is not supported on RHOSP HCI environments. To use Instance HA in your RHOSP HCI environment, you must designate a subset of the Compute nodes with the ComputeInstanceHA role. Red Hat Ceph Storage services must not be hosted on the Compute nodes that host Instance HA. Red Hat OpenStack Platform 17.1 supports only Red Hat Ceph Storage 6 for new deployments. Red Hat Ceph Storage 5 is not supported in new deployment scenarios. Important All HCI nodes in supported Hyperconverged Infrastructure environments must use the same version of Red Hat Enterprise Linux as the version used by the Red Hat OpenStack Platform controllers. If you want to use multiple Red Hat Enterprise Linux versions in a hybrid state on HCI nodes in the same Hyperconverged Infrastructure environment, contact the Red Hat Customer Experience and Engagement team to discuss a support exception. For HCI configuration guidance, see Configuration guidance .
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_hyperconverged_infrastructure/assembly_configuring-and-deploying-rhosp-hci_osp-hci
Chapter 16. Provisioning Cloud Instances on Microsoft Azure Resource Manager
Chapter 16. Provisioning Cloud Instances on Microsoft Azure Resource Manager Red Hat Satellite can interact with Microsoft Azure Resource Manager, including creating new virtual machines and controlling their power management states. Only image-based provisioning is supported for creating Azure hosts. This includes provisioning using Marketplace images, custom images, and shared image gallery. For more information about Azure Resource Manager concepts, see Azure Resource Manager documentation . Prerequisites You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in the Content Management Guide . Provide an activation key for host registration. For more information, see Creating An Activation Key in the Content Management guide. Ensure that you have the correct permissions to create an Azure Active Directory application. For more information, see Check Azure AD permissions in the Microsoft identity platform (Azure Active Directory for developers) documentation. You must create and configure an Azure Active Directory application and service principal to obtain Application or client ID, Directory or tenant ID, and Client Secret. For more information, see Use the portal to create an Azure AD application and service principal that can access resources in the Microsoft identity platform (Azure Active Directory for developers) documentation. Optional: If you want to use Puppet with Azure hosts, navigate to Administer > Settings > Puppet and enable the Use UUID for certificates setting to configure Puppet to use consistent Puppet certificate IDs. Based on your needs, associate a finish or user_data provisioning template with the operating system you want to use. For more information about provisioning templates, see Provisioning Templates . Optional: If you want the virtual machine to use a static private IP address, create a subnet in Satellite with the Network Address field matching the Azure subnet address. Before creating RHEL BYOS images, you must accept the image terms either in the Azure CLI or Portal so that the image can be used to create and manage virtual machines for your subscription. 16.1. Adding a Microsoft Azure Resource Manager Connection to Satellite Server Use this procedure to add Microsoft Azure as a compute resource in Satellite. Note that you must add a separate compute resource for each Microsoft Azure region that you want to use. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource . In the Name field, enter a name for the compute resource. From the Provider list, select Azure Resource Manager . Optional: In the Description field, enter a description for the resource. By default, the Cloud is set to Public/Standard. Azure Government Cloud supports the following regions: US Government China Germany In the Client ID field, enter your Application or client ID. In the Client Secret field, enter your client secret. In the Subscription ID field, enter your subscription ID. In the Tenant ID field, enter your Directory or tenant ID. Click Load Regions . This tests if your connection to Azure Resource Manager is successful and loads the regions available in your subscription. From the Azure Region list, select the Azure region to use. Click Submit . CLI procedure Use hammer compute-resource create to add an Azure compute resource to Satellite.
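For example, a sketch of the command with placeholder credentials and the eastus region (note the lowercase region value, as described in the note that follows):
hammer compute-resource create \
--name azure_cr_eastus \
--provider azurerm \
--app-ident My_Client_ID \
--secret-key My_Client_Secret \
--tenant My_Tenant_ID \
--sub-id My_Subscription_ID \
--region eastus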
Note that the value for the --region option must be in lowercase and must not contain special characters. Important If you are using Azure Government Cloud then you must pass in the --cloud parameter. The values for the cloud parameter are: Name of Azure Government Cloud Value for hammer --cloud US Government azureusgovernment China azurechina Germany azuregermancloud 16.2. Adding Microsoft Azure Resource Manager Images to Satellite Server To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Satellite Server. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click the name of the Microsoft Azure Resource Manager connection. Click Create Image . In the Name field, enter a name for the image. From the Operating System list, select the base operating system of the image. From the Architecture list, select the operating system architecture. In the Username field, enter the SSH user name for image access. You cannot use the root user. Optional: In the Password field, enter a password to authenticate with. In the Azure Image Name field, enter an image name in the format prefix://UUID . For a custom image, use the prefix custom . For example, custom://image-name . For a shared gallery image, use the prefix gallery . For example, gallery://image-name . For public and RHEL Bring Your Own Subscription (BYOS) images, use the prefix marketplace . For example, marketplace://OpenLogicCentOS:7.5:latest . For more information, see Find Linux VM images in the Azure Marketplace with the Azure CLI . Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. Note that the username that you enter for the image must be the same that you use when you create a host with this image. The --password option is optional when creating an image. You cannot use the root user. 16.3. Adding Microsoft Azure Resource Manager Details to a Compute Profile Use this procedure to add Microsoft Azure hardware settings to a compute profile. When you create a host on Microsoft Azure using this compute profile, these settings are automatically populated. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles . In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile , enter a Name , and click Submit . Click the name of the Azure compute resource. From the Resource group list, select the resource group to provision to. From the VM Size list, select a size of a virtual machine to provision. From the Platform list, select Linux . In the Username field, enter a user name to authenticate with. Note that the username that you enter for compute profile must be the same that you use when creating an image. To authenticate the user, use one of the following options: To authenticate using a password, enter a password in the Password field. To authenticate using an SSH key, enter an SSH key in the SSH Key field. Optional: If you want the virtual machine to use a premium virtual machine disk, select the Premium OS Disk checkbox. From the OS Disk Caching list, select the disc caching setting. 
Optional: In the Custom Script Command field, enter commands to perform on the virtual machine when the virtual machine is provisioned. Optional: If you want to run custom scripts when provisioning finishes, in the Comma separated file URIs field, enter comma-separated file URIs of scripts to use. The scripts must contain sudo at the beginning because Red Hat Satellite downloads files to the /var/lib/waagent/custom-script/download/0/ directory on the host and scripts require sudo privileges to be executed. Optional: You can add a NVIDIA Driver by selecting the NVIDIA driver / CUDA checkbox. For more information, refer to the following Microsoft Azure documentation: NVIDIA GPU Driver Extension for Linux NVIDIA GPU Driver Extension for Windows Optional: If you want to create an additional volume on the VM, click the Add Volume button, enter the Size in GB and select the Data Disk Caching method. Note that the maximum number of these disks depends on the VM Size selected. For more information on Microsoft Azure VM storage requirements, see the Microsoft Azure documentation . Click Add Interface . Important The maximum number of interfaces depends on the VM Size selected. For more information, see the Microsoft Azure documentation link above. From the Azure Subnet list, select the Azure subnet to provision to. From the Public IP list, select the public IP setting. Optional: If you want the virtual machine to use a static private IP, select the Static Private IP checkbox. Click Submit . CLI procedure Create a compute profile to use with the Azure Resource Manager compute resource: Add Azure details to the compute profile. With the username setting, enter the SSH user name for image access. Note that the username that you enter for compute profile must be the same that you use when creating an image. Optional: If you want to run scripts on the virtual machine after provisioning, specify the following settings: To enter the script directly, with the script_command setting, enter a command to be executed on the provisioned virtual machine. To run a script from a URI, with the script_uris setting, enter comma-separated file URIs of scripts to use. The scripts must contain sudo at the beginning because Red Hat Satellite downloads files to the /var/lib/waagent/custom-script/download/0/ directory on the host and therefore scripts require sudo privileges to be executed. 16.4. Creating Image-based Hosts on Microsoft Azure Resource Manager In Satellite, you can use Microsoft Azure Resource Manager provisioning to create hosts from an existing image. The new host entry triggers the Microsoft Azure Resource Manager server to create the instance using the pre-existing image as a basis for the new volume. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context. From the Host Group list, select the host group that you want to use to populate the form. From the Deploy on list, select the Microsoft Azure Resource Manager connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. From the Lifecycle Environment list, select the environment. Click the Interfaces tab and click Edit on the host's interface. 
Verify that the fields are automatically populated, particularly the following items: The Name from the Host tab becomes the DNS name . The MAC address field is blank. Microsoft Azure Resource Manager assigns a MAC address to the host during provisioning. The Azure Subnet field is populated with the required Azure subnet. The Managed , Primary , and Provision options are automatically selected for the first interface on the host. If not, select them. Optional: If you want to use a static private IP address, from the IPv4 Subnet list select the Satellite subnet with the Network Address field matching the Azure subnet address. In the IPv4 Address field, enter an IPv4 address within the range of your Azure subnet. Click the Operating System tab, and confirm that all fields automatically contain values. For Provisioning Method , ensure Image Based is selected. From the Image list, select the Azure Resource Manager image that you want to use for provisioning. In the Root Password field, enter the root password to authenticate with. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host entry. CLI procedure Create the host with the hammer host create command and include --provision-method image . Replace the values in the following example with the appropriate values for your environment. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 16.5. Deleting a VM on Microsoft Azure You can delete VMs running on Microsoft Azure from within Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Microsoft Azure provider. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Microsoft Azure compute resource while retaining any associated hosts within Satellite. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually. 16.6. Uninstalling Microsoft Azure Plugin If you have previously installed the Microsoft Azure plugin but don't use it anymore to manage and deploy hosts to Azure, you can uninstall it from your Satellite Server. Procedure Uninstall the Azure compute resource provider from your Satellite Server: For Red Hat Enterprise Linux 8: For Red Hat Enterprise Linux 7: Optional: In the Satellite web UI, navigate to Administer > About and select the Available Providers tab to verify the removal of the Microsoft Azure plugin.
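As noted in the prerequisites at the beginning of this chapter, RHEL BYOS marketplace images require accepting the image terms before hosts can be created from them. A sketch using the Azure CLI follows; the URN is only an example and must be replaced with the offer you actually use:
# Accept the Marketplace terms for an example RHEL BYOS offer
az vm image terms accept --urn RedHat:rhel-byos:rhel-lvm88:latest
# Verify that the terms are recorded as accepted
az vm image terms show --urn RedHat:rhel-byos:rhel-lvm88:latest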
[ "hammer compute-resource create --app-ident My_Client_ID --name My_Compute_Resource_Name --provider azurerm --region \" My_Region \" --secret-key My_Client_Secret --sub-id My_Subscription_ID --tenant My_Tenant_ID", "hammer compute-resource image create --name Azure_image_name --compute-resource azure_cr_name --uuid ' marketplace://RedHat:RHEL:7-RAW:latest ' --username ' azure_username ' --user-data no", "hammer compute-profile create --name compute_profile_name", "hammer compute-profile values create --compute-attributes=\"resource_group= resource_group ,vm_size= Standard_B1s ,username= azure_user ,password= azure_password ,platform=Linux,script_command=touch /var/tmp/text.txt\" --compute-profile \" compute_profile_name \" --compute-resource azure_cr_name --interface=\"compute_public_ip=Dynamic,compute_network=mysubnetID,compute_private_ip=false\" --volume=\"disk_size_gb= 5 ,data_disk_caching= None \"", "hammer host create --architecture x86_64 --compute-profile \" My_Compute_Profile \" --compute-resource \" My_Azure_Compute_Resource \" --domain \" My_Domain \" --image \" My_Azure_Image \" --location \" My_Location \" --name=\" Azure_VM \" --operatingsystem \" My_Operating_System \" --organization \" My_Organization \" --provision-method \"image\"", "satellite-maintain packages remove rubygem-foreman_azure_rm rubygem-ms_rest_azure satellite-installer --no-enable-foreman-plugin-azure", "satellite-maintain packages remove -y tfm-rubygem-foreman_azure_rm tfm-rubygem-ms_rest_azure satellite-installer --no-enable-foreman-plugin-azure" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/Provisioning_Cloud_Instances_on_Microsoft_Azure_Resource_Manager_provisioning
Chapter 17. Service [v1]
Chapter 17. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ServiceSpec describes the attributes that a user creates on a service. status object ServiceStatus represents the current status of a service. 17.1.1. .spec Description ServiceSpec describes the attributes that a user creates on a service. Type object Property Type Description allocateLoadBalancerNodePorts boolean allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. clusterIP string clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies clusterIPs array (string) ClusterIPs is a list of IP addresses assigned to this service, and are usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. 
This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be empty) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. If this field is not specified, it will be initialized from the clusterIP field. If this field is specified, clients must ensure that clusterIPs[0] and clusterIP have the same value. This field may hold a maximum of two entries (dual-stack IPs, in either order). These IPs must correspond to the values of the ipFamilies field. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies externalIPs array (string) externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system. externalName string externalName is the external reference that discovery mechanisms will return as an alias for this service (e.g. a DNS CNAME record). No proxying will be involved. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) and requires type to be "ExternalName". externalTrafficPolicy string externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service's "externally-facing" addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get "Cluster" semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node. Possible enum values: - "Cluster" routes traffic to all endpoints. - "Local" preserves the source IP of the traffic by routing only to endpoints on the same node as the traffic was received on (dropping the traffic if there are no local endpoints). healthCheckNodePort integer healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. 
load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set. internalTrafficPolicy string InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). ipFamilies array (string) IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName. This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. ipFamilyPolicy string IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be "SingleStack" (a single IP family), "PreferDualStack" (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or "RequireDualStack" (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName. loadBalancerClass string loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type. loadBalancerIP string Only applies to Service Type: LoadBalancer. 
This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version. loadBalancerSourceRanges array (string) If specified and supported by the platform, traffic through the cloud-provider load-balancer is restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/ ports array The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies ports[] object ServicePort contains information on service's port. publishNotReadyAddresses boolean publishNotReadyAddresses indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready. The primary use case for setting this field is for a StatefulSet's Headless Service to propagate SRV DNS records for its Pods for the purpose of peer discovery. The Kubernetes controllers that generate Endpoints and EndpointSlice resources for Services interpret this to mean that all endpoints are considered "ready" even if the Pods themselves are not. Agents which consume only Kubernetes generated endpoints through the Endpoints or EndpointSlice resources can safely assume this behavior. selector object (string) Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/ sessionAffinity string Used to maintain session affinity. Must be "ClientIP" or "None". Defaults to "None". More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Possible enum values: - "ClientIP" - client IP based session affinity. - "None" - no session affinity. sessionAffinityConfig object SessionAffinityConfig represents the configurations of session affinity. type string type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or, if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName. 
Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types Possible enum values: - "ClusterIP" means a service will only be accessible inside the cluster, via the cluster IP. - "ExternalName" means a service consists of only a reference to an external name that kubedns or equivalent will return as a CNAME record, with no exposing or proxying of any pods involved. - "LoadBalancer" means a service will be exposed via an external load balancer (if the cloud provider supports it), in addition to 'NodePort' type. - "NodePort" means a service will be exposed on one port of every node, in addition to 'ClusterIP' type. 17.1.2. .spec.ports Description The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Type array 17.1.3. .spec.ports[] Description ServicePort contains information on service's port. Type object Required port Property Type Description appProtocol string The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service. nodePort integer The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport port integer The port that will be exposed by this service. protocol string The IP protocol for this port. Supports "TCP", "UDP", and "SCTP". Default is TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. targetPort IntOrString Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service 17.1.4. .spec.sessionAffinityConfig Description SessionAffinityConfig represents the configurations of session affinity. Type object Property Type Description clientIP object ClientIPConfig represents the configurations of Client IP based session affinity. 17.1.5. .spec.sessionAffinityConfig.clientIP Description ClientIPConfig represents the configurations of Client IP based session affinity. 
Type object Property Type Description timeoutSeconds integer timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be > 0 && <= 86400 (for 1 day) if ServiceAffinity == "ClientIP". Default value is 10800 (for 3 hours). 17.1.6. .status Description ServiceStatus represents the current status of a service. Type object Property Type Description conditions array (Condition) Current service state loadBalancer object LoadBalancerStatus represents the status of a load-balancer. 17.1.7. .status.loadBalancer Description LoadBalancerStatus represents the status of a load-balancer. Type object Property Type Description ingress array Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. ingress[] object LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. 17.1.8. .status.loadBalancer.ingress Description Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. Type array 17.1.9. .status.loadBalancer.ingress[] Description LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. Type object Property Type Description hostname string Hostname is set for load-balancer ingress points that are DNS based (typically AWS load-balancers). ip string IP is set for load-balancer ingress points that are IP based (typically GCE or OpenStack load-balancers). ports array Ports is a list of records of service ports. If used, every port defined in the service should have an entry in it. ports[] object 17.1.10. .status.loadBalancer.ingress[].ports Description Ports is a list of records of service ports. If used, every port defined in the service should have an entry in it. Type array 17.1.11. .status.loadBalancer.ingress[].ports[] Description Type object Required port protocol Property Type Description error string Error is to record the problem with the service port. The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase. port integer Port is the port number of the service port of which status is recorded here. protocol string Protocol is the protocol of the service port of which status is recorded here. The supported values are: "TCP", "UDP", "SCTP" Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 17.2. API endpoints The following API endpoints are available: /api/v1/services GET : list or watch objects of kind Service /api/v1/watch/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services DELETE : delete collection of Service GET : list or watch objects of kind Service POST : create a Service /api/v1/watch/namespaces/{namespace}/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. 
/api/v1/namespaces/{namespace}/services/{name} DELETE : delete a Service GET : read the specified Service PATCH : partially update the specified Service PUT : replace the specified Service /api/v1/watch/namespaces/{namespace}/services/{name} GET : watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/services/{name}/status GET : read status of the specified Service PATCH : partially update status of the specified Service PUT : replace status of the specified Service 17.2.1. /api/v1/services Table 17.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Service Table 17.2. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty 17.2.2. /api/v1/watch/services Table 17.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 17.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.3. /api/v1/namespaces/{namespace}/services Table 17.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 17.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Service Table 17.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 17.8. Body parameters Parameter Type Description body DeleteOptions schema Table 17.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Service Table 17.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.11. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty HTTP method POST Description create a Service Table 17.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.13. Body parameters Parameter Type Description body Service schema Table 17.14. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 202 - Accepted Service schema 401 - Unauthorized Empty 17.2.4. /api/v1/watch/namespaces/{namespace}/services Table 17.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 17.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 17.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.5. /api/v1/namespaces/{namespace}/services/{name} Table 17.18. Global path parameters Parameter Type Description name string name of the Service namespace string object name and auth scope, such as for teams and projects Table 17.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Service Table 17.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 17.21. Body parameters Parameter Type Description body DeleteOptions schema Table 17.22. HTTP responses HTTP code Reponse body 200 - OK Service schema 202 - Accepted Service schema 401 - Unauthorized Empty HTTP method GET Description read the specified Service Table 17.23. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Service Table 17.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.25. Body parameters Parameter Type Description body Patch schema Table 17.26. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Service Table 17.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.28. Body parameters Parameter Type Description body Service schema Table 17.29. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty 17.2.6. /api/v1/watch/namespaces/{namespace}/services/{name} Table 17.30. Global path parameters Parameter Type Description name string name of the Service namespace string object name and auth scope, such as for teams and projects Table 17.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 17.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.7. /api/v1/namespaces/{namespace}/services/{name}/status Table 17.33. Global path parameters Parameter Type Description name string name of the Service namespace string object name and auth scope, such as for teams and projects Table 17.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Service Table 17.35. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Service Table 17.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.37. Body parameters Parameter Type Description body Patch schema Table 17.38. HTTP responses HTTP code Response body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Service Table 17.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.40. Body parameters Parameter Type Description body Service schema Table 17.41. HTTP responses HTTP code Response body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty
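The single-object watch endpoint in Section 17.2.6 is deprecated in favor of a list operation with the watch parameter and a fieldSelector that narrows the result to one Service. The sketch below shows what that looks like with the Python kubernetes client; it is an illustration only, and the namespace and Service name are placeholder assumptions.

from kubernetes import client, config, watch

# Load credentials from the local kubeconfig; inside a pod you would normally
# call config.load_incluster_config() instead.
config.load_kube_config()

v1 = client.CoreV1Api()
w = watch.Watch()

# A list operation with watch semantics, filtered to a single Service through
# fieldSelector, which is the recommended replacement for the deprecated
# /api/v1/watch/namespaces/{namespace}/services/{name} endpoint.
for event in w.stream(
    v1.list_namespaced_service,
    namespace="default",                        # placeholder namespace
    field_selector="metadata.name=my-service",  # placeholder Service name
    timeout_seconds=60,
):
    svc = event["object"]
    print(event["type"], svc.metadata.name, svc.spec.cluster_ip)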
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_apis/service-v1
Chapter 50. Handling Exceptions
Chapter 50. Handling Exceptions Abstract When possible, exceptions caught by a resource method should cause a useful error to be returned to the requesting consumer. JAX-RS resource methods can throw a WebApplicationException exception. You can also provide ExceptionMapper<E> implementations to map exceptions to appropriate responses. 50.1. Overview of JAX-RS Exception Classes Overview In JAX-RS 1.x, the only available exception class is WebApplicationException . Since JAX-RS 2.0, however, a number of additional JAX-RS exception classes have been defined. JAX-RS runtime level exceptions The following exceptions are meant to be thrown by the JAX-RS runtime only (that is, you must not throw these exceptions from your application level code): ProcessingException (JAX-RS 2.0 only) The javax.ws.rs.ProcessingException can be thrown during request processing or during response processing in the JAX-RS runtime. For example, this error could be thrown due to errors in the filter chain or interceptor chain processing. ResponseProcessingException (JAX-RS 2.0 only) The javax.ws.rs.client.ResponseProcessingException is a subclass of ProcessingException , which can be thrown when errors occur in the JAX-RS runtime on the client side . JAX-RS application level exceptions The following exceptions are intended to be thrown (and caught) in your application level code: WebApplicationException The javax.ws.rs.WebApplicationException is a generic application level JAX-RS exception, which can be thrown in application code on the server side. This exception type can encapsulate an HTTP status code, an error message, and (optionally) a response message. For details, see Section 50.2, "Using WebApplicationException exceptions to report" . ClientErrorException (JAX-RS 2.0 only) The javax.ws.rs.ClientErrorException exception class inherits from WebApplicationException and is used to encapsulate HTTP 4xx status codes. ServerErrorException (JAX-RS 2.0 only) The javax.ws.rs.ServerErrorException exception class inherits from WebApplicationException and is used to encapsulate HTTP 5xx status codes. RedirectionException (JAX-RS 2.0 only) The javax.ws.rs.RedirectionException exception class inherits from WebApplicationException and is used to encapsulate HTTP 3xx status codes. 50.2. Using WebApplicationException exceptions to report Overview The JAX-RS API introduced the WebApplicationException runtime exception to provide an easy way for resource methods to create exceptions that are appropriate for RESTful clients to consume. WebApplicationException exceptions can include a Response object that defines the entity body to return to the originator of the request. It also provides a mechanism for specifying the HTTP status code to be returned to the client if no entity body is provided. Creating a simple exception The easiest means of creating a WebApplicationException exception is to use either the no argument constructor or the constructor that wraps the original exception in a WebApplicationException exception. Both constructors create a WebApplicationException with an empty response. When an exception created by either of these constructors is thrown, the runtime returns a response with an empty entity body and a status code of 500 Server Error . Setting the status code returned to the client When you want to return an error code other than 500 , you can use one of the four WebApplicationException constructors that allow you to specify the status.
Two of these constructors, shown in Example 50.1, "Creating a WebApplicationException with a status code" , take the return status as an integer. Example 50.1. Creating a WebApplicationException with a status code WebApplicationException int status WebApplicationException java.lang.Throwable cause int status The other two, shown in Example 50.2, "Creating a WebApplicationException with a status code" take the response status as an instance of Response.Status . Example 50.2. Creating a WebApplicationException with a status code WebApplicationException javax.ws.rs.core.Response.Status status WebApplicationException java.lang.Throwable cause javax.ws.rs.core.Response.Status status When an exception created by either of these constructors is thrown, the runtime returns a response with an empty entity body and the specified status code. Providing an entity body If you want a message to be sent along with the exception, you can use one of the WebApplicationException constructors that takes a Response object. The runtime uses the Response object to create the response sent to the client. The entity stored in the response is mapped to the entity body of the message and the status field of the response is mapped to the HTTP status of the message. Example 50.3, "Sending a message with an exception" shows code for returning a text message to a client containing the reason for the exception and sets the HTTP message status to 409 Conflict . Example 50.3. Sending a message with an exception Extending the generic exception It is possible to extend the WebApplicationException exception. This would allow you to create custom exceptions and eliminate some boiler plate code. Example 50.4, "Extending WebApplicationException" shows a new exception that creates a similar response to the code in Example 50.3, "Sending a message with an exception" . Example 50.4. Extending WebApplicationException 50.3. JAX-RS 2.0 Exception Types Overview JAX-RS 2.0 introduces a number of specific HTTP exception types that you can throw (and catch) in your application code (in addition to the existing WebApplicationException exception type). These exception types can be used to wrap standard HTTP status codes, either for HTTP client errors (HTTP 4xx status codes), or HTTP server errors (HTTP 5xx status codes). Exception hierarchy Figure 50.1, "JAX-RS 2.0 Application Exception Hierarchy" shows the hierarchy of application level exceptions supported in JAX-RS 2.0. Figure 50.1. JAX-RS 2.0 Application Exception Hierarchy WebApplicationException class The javax.ws.rs.WebApplicationException exception class (which has been available since JAX-RS 1.x) is at the base of the JAX-RS 2.0 exception hierarchy, and is described in detail in Section 50.2, "Using WebApplicationException exceptions to report" . ClientErrorException class The javax.ws.rs.ClientErrorException exception class is used to encapsulate HTTP client errors (HTTP 4xx status codes). In your application code, you can throw this exception or one of its subclasses. ServerErrorException class The javax.ws.rs.ServerErrorException exception class is used to encapsulate HTTP server errors (HTTP 5xx status codes). In your application code, you can throw this exception or one of its subclasses. RedirectionException class The javax.ws.rs.RedirectionException exception class is used to encapsulate HTTP request redirection (HTTP 3xx status codes). The constructors of this class take a URI argument, which specifies the redirect location. 
The redirect URI is accessible through the getLocation() method. Normally, HTTP redirection is transparent on the client side. Client exception subclasses You can raise the following HTTP client exceptions (HTTP 4xx status codes) in a JAX-RS 2.0 application: BadRequestException Encapsulates the 400 Bad Request HTTP error status. ForbiddenException Encapsulates the 403 Forbidden HTTP error status. NotAcceptableException Encapsulates the 406 Not Acceptable HTTP error status. NotAllowedException Encapsulates the 405 Method Not Allowed HTTP error status. NotAuthorizedException Encapsulates the 401 Unauthorized HTTP error status. This exception could be raised in either of the following cases: The client did not send the required credentials (in a HTTP Authorization header), or The client presented the credentials, but the credentials were not valid. NotFoundException Encapsulates the 404 Not Found HTTP error status. NotSupportedException Encapsulates the 415 Unsupported Media Type HTTP error status. Server exception subclasses You can raise the following HTTP server exceptions (HTTP 5xx status codes) in a JAX-RS 2.0 application: InternalServerErrorException Encapsulates the 500 Internal Server Error HTTP error status. ServiceUnavailableException Encapsulates the 503 Service Unavailable HTTP error status. 50.4. Mapping Exceptions to Responses Overview There are instances where throwing a WebApplicationException exception is impractical or impossible. For example, you may not want to catch all possible exceptions and then create a WebApplicationException for them. You may also want to use custom exceptions that make working with your application code easier. To handle these cases the JAX-RS API allows you to implement a custom exception provider that generates a Response object to send to a client. Custom exception providers are created by implementing the ExceptionMapper<E> interface. When registered with the Apache CXF runtime, the custom provider will be used whenever an exception of type E is thrown. How exception mappers are selected Exception mappers are used in two cases: When any exception or one of its subclasses, is thrown, the runtime will check for an appropriate exception mapper. An exception mapper is selected if it handles the specific exception thrown. If there is not an exception mapper for the specific exception that was thrown, the exception mapper for the nearest superclass of the exception is selected. By default, a WebApplicationException will be handled by the default mapper, WebApplicationExceptionMapper . Even if an additional custom mapper is registered, which could potentially handle a WebApplicationException exception (for example, a custom RuntimeException mapper), the custom mapper will not be used and the WebApplicationExceptionMapper will be used instead. This behaviour can be changed, however, by setting the default.wae.mapper.least.specific property to true on a Message object. When this option is enabled, the default WebApplicationExceptionMapper is relegated to the lowest priority, so that it becomes possible to handle a WebApplicationException exception with a custom exception mapper. For example, if this option is enabled, it would be possible to catch a WebApplicationException exception by registering a custom RuntimeException mapper. See the section called "Registering an exception mapper for WebApplicationException" . 
If an exception mapper is not found for an exception, the exception is wrapped in an ServletException exception and passed onto the container runtime. The container runtime will then determine how to handle the exception. Implementing an exception mapper Exception mappers are created by implementing the javax.ws.rs.ext.ExceptionMapper<E> interface. As shown in Example 50.5, "Exception mapper interface" , the interface has a single method, toResponse() , that takes the original exception as a parameter and returns a Response object. Example 50.5. Exception mapper interface The Response object created by the exception mapper is processed by the runtime just like any other Response object. The resulting response to the consumer will contain the status, headers, and entity body encapsulated in the Response object. Exception mapper implementations are considered providers by the runtime. Therefore they must be decorated with the @Provider annotation. If an exception occurs while the exception mapper is building the Response object, the runtime will send a response with a status of 500 Server Error to the consumer. Example 50.6, "Mapping an exception to a response" shows an exception mapper that intercepts Spring AccessDeniedException exceptions and generates a response with a 403 Forbidden status and an empty entity body. Example 50.6. Mapping an exception to a response The runtime will catch any AccessDeniedException exceptions and create a Response object with no entity body and a status of 403 . The runtime will then process the Response object as it would for a normal response. The result is that the consumer will receive an HTTP response with a status of 403 . Registering an exception mapper Before a JAX-RS application can use an exception mapper, the exception mapper must be registered with the runtime. Exception mappers are registered with the runtime using the jaxrs:providers element in the application's configuration file. The jaxrs:providers element is a child of the jaxrs:server element and contains a list of bean elements. Each bean element defines one exception mapper. Example 50.7, "Registering exception mappers with the runtime" shows a JAX-RS server configured to use a custom exception mapper, SecurityExceptionMapper . Example 50.7. Registering exception mappers with the runtime Registering an exception mapper for WebApplicationException Registering an exception mapper for a WebApplicationException exception is a special case, because this exception type is automatically handled by the default WebApplicationExceptionMapper . Normally, even when you register a custom mapper that you would expect to handle WebApplicationException , it will continue to be handled by the default WebApplicationExceptionMapper . To change this default behaviour, you need to set the default.wae.mapper.least.specific property to true . For example, the following XML code shows how to enable the default.wae.mapper.least.specific property on a JAX-RS endpoint: You can also set the default.wae.mapper.least.specific property in an interceptor, as shown in the following example:
[ "errors indexterm:[WebApplicationException]", "import javax.ws.rs.core.Response; import javax.ws.rs.WebApplicationException; import org.apache.cxf.jaxrs.impl.ResponseBuilderImpl; ResponseBuilderImpl builder = new ResponseBuilderImpl(); builder.status(Response.Status.CONFLICT); builder.entity(\"The requested resource is conflicted.\"); Response response = builder.build(); throw WebApplicationException(response);", "public class ConflicteddException extends WebApplicationException { public ConflictedException(String message) { ResponseBuilderImpl builder = new ResponseBuilderImpl(); builder.status(Response.Status.CONFLICT); builder.entity(message); super(builder.build()); } } throw ConflictedException(\"The requested resource is conflicted.\");", "public interface ExceptionMapper<E extends java.lang.Throwable> { public Response toResponse(E exception); }", "import javax.ws.rs.core.Response; import javax.ws.rs.ext.ExceptionMapper; import org.springframework.security.AccessDeniedException; @Provider public class SecurityExceptionMapper implements ExceptionMapper<AccessDeniedException> { public Response toResponse(AccessDeniedException exception) { return Response.status(Response.Status.FORBIDDEN).build(); } }", "<beans ...> <jaxrs:server id=\"customerService\" address=\"/\"> <jaxrs:providers> <bean id=\"securityException\" class=\"com.bar.providers.SecurityExceptionMapper\"/> </jaxrs:providers> </jaxrs:server> </beans>", "<beans ...> <jaxrs:server id=\"customerService\" address=\"/\"> <jaxrs:providers> <bean id=\"securityException\" class=\"com.bar.providers.SecurityExceptionMapper\"/> </jaxrs:providers> <jaxrs:properties> <entry key=\"default.wae.mapper.least.specific\" value=\"true\"/> </jaxrs:properties> </jaxrs:server> </beans>", "// Java public void handleMessage(Message message) { m.put(\"default.wae.mapper.least.specific\", true);" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/restexceptions
Keeping Red Hat OpenStack Platform Updated
Keeping Red Hat OpenStack Platform Updated Red Hat OpenStack Platform 16.0 Performing minor updates of Red Hat OpenStack Platform OpenStack Documentation Team [email protected] Abstract This document provides the procedure to update your Red Hat OpenStack Platform 16.0 (Train) environment. This document assumes you will update a containerized OpenStack Platform deployment installed on Red Hat Enterprise Linux 8.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/keeping_red_hat_openstack_platform_updated/index
Chapter 3. Post-deployment IPv6 operations
Chapter 3. Post-deployment IPv6 operations After you deploy the overcloud with IPv6 networking, you must perform some additional configuration. Prerequisites A successful undercloud installation. For more information, see Installing director . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . 3.1. Creating an IPv6 project network on the overcloud The overcloud requires an IPv6-based Project network for instances. Source the overcloudrc file and create an initial Project network in neutron . Prerequisites A successful undercloud installation. For more information, see Installing director . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Procedure Source the overcloud credentials file: Create a network and subnet: This creates a basic neutron network called default . Verification steps Verify that the network was created successfully: 3.2. Creating an IPv6 public network on the overcloud After you configure the node interfaces to use the External network, you must create this network on the overcloud to enable network access. Prerequisites A successful undercloud installation. For more information, see Installing director . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Procedure Create an external network and subnet: This command creates a network called public that provides an allocation pool of over 65000 IPv6 addresses for our instances. Create a router to route instance traffic to the External network.
[ "source ~/overcloudrc", "openstack network create default --external --provider-physical-network datacentre --provider-network-type vlan --provider-segment 101 openstack subnet create default --subnet-range 2001:db8:fd00:6000::/64 --ipv6-address-mode slaac --ipv6-ra-mode slaac --ip-version 6 --network default", "openstack network list openstack subnet list", "openstack network create public --external --provider-physical-network datacentre --provider-network-type vlan --provider-segment 100 openstack subnet create public --network public --subnet-range 2001:db8:0:2::/64 --ip-version 6 --gateway 2001:db8::1 --allocation-pool start=2001:db8:0:2::2,end=2001:db8:0:2::ffff --ipv6-address-mode slaac --ipv6-ra-mode slaac", "openstack router create public-router openstack router set public-router --external-gateway public" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_ipv6_networking_for_the_overcloud/assembly_post-deployment-ipv6-operations
Chapter 14. Monitoring Data Grid services
Chapter 14. Monitoring Data Grid services Data Grid exposes metrics that can be used by Prometheus and Grafana for monitoring and visualizing the cluster state. Note This documentation explains how to set up monitoring on OpenShift Container Platform. If you're working with community Prometheus deployments, you might find these instructions useful as a general guide. However you should refer to the Prometheus documentation for installation and usage instructions. See the Prometheus Operator documentation. 14.1. Creating a Prometheus service monitor Data Grid Operator automatically creates a Prometheus ServiceMonitor that scrapes metrics from your Data Grid cluster. Procedure Enable monitoring for user-defined projects on OpenShift Container Platform. When the Operator detects an Infinispan CR with the monitoring annotation set to true , which is the default, Data Grid Operator does the following: Creates a ServiceMonitor named <cluster_name>-monitor . Adds the infinispan.org/monitoring: 'true' annotation to your Infinispan CR metadata, if the value is not already explicitly set: apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'true' Note To authenticate with Data Grid, Prometheus uses the operator credentials. Verification You can check that Prometheus is scraping Data Grid metrics as follows: In the OpenShift Web Console, select the </> Developer perspective and then select Monitoring . Open the Dashboard tab for the namespace where your Data Grid cluster runs. Open the Metrics tab and confirm that you can query Data Grid metrics such as: Additional resources Enabling monitoring for user-defined projects 14.1.1. Disabling the Prometheus service monitor You can disable the ServiceMonitor if you do not want Prometheus to scrape metrics for your Data Grid cluster. Procedure Set 'false' as the value for the infinispan.org/monitoring annotation in your Infinispan CR. apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'false' Apply the changes. 14.1.2. Configuring Service Monitor Target Labels You can configure the generated ServiceMonitor to propagate Service labels to the underlying metrics using the ServiceMonitor spec.targetLabels field. Use the Service labels to filter and aggregate the metrics collected from the monitored endpoints. Procedure Define labels to apply to your service by setting the infinispan.org/targetLabels annotation in your Infinispan CR. Specify a comma-separated list of the labels required in your metrics using the infinispan.org/serviceMonitorTargetLabels annotation on your Infinispan CR. apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/targetLabels: "label1,label2,label3" infinispan.org/serviceMonitorTargetLabels: "label1,label2" Apply the changes. 14.2. Installing the Grafana Operator To support various needs, Data Grid Operator integrates with the community version of the Grafana Operator to create dashboards for Data Grid services. Until Grafana is integrated with OpenShift user workload monitoring, the only option is to rely on the community version. You can install the Grafana Operator on OpenShift from the OperatorHub and should create a subscription for the alpha channel. However, as is the policy for all Community Operators, Red Hat does not certify the Grafana Operator and does not provide support for it in combination with Data Grid. 
When you install the Grafana Operator you are prompted to acknowledge a warning about the community version before you can continue. 14.3. Creating Grafana data sources Create a GrafanaDatasource CR so you can visualize Data Grid metrics in Grafana dashboards. Prerequisites Have an oc client. Have cluster-admin access to OpenShift Container Platform. Enable monitoring for user-defined projects on OpenShift Container Platform. Install the Grafana Operator from the alpha channel and create a Grafana CR. Procedure Create a ServiceAccount that lets Grafana read Data Grid metrics from Prometheus. Apply the ServiceAccount . Grant cluster-monitoring-view permissions to the ServiceAccount . Create a Grafana data source. Retrieve the token for the ServiceAccount . Define a GrafanaDataSource that includes the token in the spec.datasources.secureJsonData.httpHeaderValue1 field, as in the following example: Apply the GrafanaDataSource . steps Enable Grafana dashboards with the Data Grid Operator configuration properties. 14.4. Configuring Data Grid dashboards Data Grid Operator provides global configuration properties that let you configure Grafana dashboards for Data Grid clusters. Note You can modify global configuration properties while Data Grid Operator is running. Prerequisites Data Grid Operator must watch the namespace where the Grafana Operator is running. Procedure Create a ConfigMap named infinispan-operator-config in the Data Grid Operator namespace. Specify the namespace of your Data Grid cluster with the data.grafana.dashboard.namespace property. Note Deleting the value for this property removes the dashboard. Changing the value moves the dashboard to that namespace. Specify a name for the dashboard with the data.grafana.dashboard.name property. If necessary, specify a monitoring key with the data.grafana.dashboard.monitoring.key property. Create infinispan-operator-config or update the configuration. Open the Grafana UI, which is available at: 14.5. Enabling JMX remote ports for Data Grid clusters Enable JMX remote ports to expose Data Grid MBeans and to integrate Data Grid with external monitoring systems such as Cryostat. When you enable JMX for Data Grid cluster, the following occurs: Each Data Grid server pod exposes an authenticated JMX endpoint on port 9999 utilizing the "admin" security-realm, which includes the Operator user credentials. The <cluster-name>-admin Service exposes port 9999 . Note You can enable or disable JMX only during the creation of the Infinispan CR. Once the CR instance is created, you cannot modify the JMX settings. Procedure Enable JMX in your Infinispan CR. Retrieve the Operator user credentials to authenticate client JMX connections. Additional resources Enabling JMX statistics 14.6. Setting up JFR recordings with Cryostat Enable JDK Flight Recorder (JFR) monitoring for your Data Grid clusters that run on OpenShift. JFR recordings with Cryostat JFR provides insights into various aspects of JVM performance to ease cluster inspection and debugging. Depending on your requirements, you can store and analyze your recordings using the integrated tools provided by Cryostat or export the recordings to an external monitoring application. Prerequisites Install the Cryostat Operator. You can install the Cryostat Operator in your OpenShift project by using Operator Lifecycle Manager (OLM). Have JMX enabled on your Data Grid cluster. You must enable JMX before deploying the cluster, as JMX settings cannot be modified after deployment. 
Procedure Create a Cryostat CR in the same namespace as your Infinispan CR. Note The Cryostat Operator requires cert-manager for traffic encryption. If the cert-manager is enabled but not installed, the deployment fails. For details, see the Installing Cryostat guide. Wait for the Cryostat CR to be ready. Open the Cryostat status.applicationUrl . Retrieve the Operator user credentials to authenticate client JMX connections in the Cryostat UI. In the Cryostat UI, navigate to the Security menu. Click the Add button. The Store Credentials window opens. In the Match Expression field, enter match expression details in the following format: Additional resources Installing Cryostat Configuring Cryostat Credentials Enabling JMX remote ports for Data Grid clusters
[ "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'true'", "vendor_cache_manager_default_cluster_size", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'false'", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/targetLabels: \"label1,label2,label3\" infinispan.org/serviceMonitorTargetLabels: \"label1,label2\"", "apiVersion: v1 kind: ServiceAccount metadata: name: infinispan-monitoring", "apply -f service-account.yaml", "adm policy add-cluster-role-to-user cluster-monitoring-view -z infinispan-monitoring", "serviceaccounts get-token infinispan-monitoring", "apiVersion: integreatly.org/v1alpha1 kind: GrafanaDataSource metadata: name: grafanadatasource spec: name: datasource.yaml datasources: - access: proxy editable: true isDefault: true jsonData: httpHeaderName1: Authorization timeInterval: 5s tlsSkipVerify: true name: Prometheus secureJsonData: httpHeaderValue1: >- Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Imc4O type: prometheus url: 'https://thanos-querier.openshift-monitoring.svc.cluster.local:9091'", "apply -f grafana-datasource.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: infinispan-operator-config data: grafana.dashboard.namespace: infinispan grafana.dashboard.name: infinispan grafana.dashboard.monitoring.key: middleware", "apply -f infinispan-operator-config.yaml", "get routes grafana-route -o jsonpath=https://\"{.spec.host}\"", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: jmx: enabled: true", "get secret infinispan-generated-operator-secret -o jsonpath=\"{.data.identities\\.yaml}\" | base64 --decode", "apiVersion: operator.cryostat.io/v1beta1 kind: Cryostat metadata: name: cryostat-sample spec: minimal: false enableCertManager: true", "wait -n <namespace> --for=condition=MainDeploymentAvailable cryostat/cryostat-sample", "-n <namespace> get cryostat cryostat-sample", "get secret infinispan-generated-operator-secret -o jsonpath=\"{.data.identities\\.yaml}\" | base64 --decode", "target.labels['infinispan_cr'] == '<cluster_name>'" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/monitoring-services
1.3. Performance Features and Improvements
1.3. Performance Features and Improvements CPU/Kernel NUMA - Non-Uniform Memory Access. See Chapter 9, NUMA for details on NUMA. CFS - Completely Fair Scheduler. A modern class-focused scheduler. RCU - Read Copy Update. Better handling of shared thread data. Up to 160 virtual CPUs (vCPUs). Memory Huge Pages and other optimizations for memory-intensive environments. See Chapter 8, Memory for details. Networking vhost-net - a fast, kernel-based VirtIO solution. SR-IOV - for near-native networking performance levels. Block I/O AIO - Support for a thread to overlap other I/O operations. MSI - PCI bus device interrupt generation. Scatter Gather - An improved I/O mode for data buffer handling. Note For more details on virtualization support, limits, and features, refer to the Red Hat Enterprise Linux 6 Virtualization Getting Started Guide and the following URLs: https://access.redhat.com/certified-hypervisors https://access.redhat.com/articles/rhel-kvm-limits
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-introduction-6_improvements
Chapter 2. Configuring an IBM Cloud account
Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud(R) account. 2.1. Prerequisites You have an IBM Cloud(R) account with a subscription. You cannot install OpenShift Container Platform on a free or trial IBM Cloud(R) account. 2.2. Quotas and limits on IBM Cloud The OpenShift Container Platform cluster uses a number of IBM Cloud(R) components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud(R) account. For a comprehensive list of the default IBM Cloud(R) quotas and service limits, see IBM Cloud(R)'s documentation for Quotas and service limits . Virtual Private Cloud (VPC) Each OpenShift Container Platform cluster creates its own VPC. The default quota of VPCs per region is 10 and will allow 10 clusters. To have more than 10 clusters in a single region, you must increase this quota. Application load balancer By default, each cluster creates three application load balancers (ALBs): Internal load balancer for the master API server External load balancer for the master API server Load balancer for the router You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs are 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Cloud(R). Floating IP address By default, the installation program distributes control plane and compute machines across all availability zones within a region to provision the cluster in a highly available configuration. In each availability zone, a public gateway is created and requires a separate floating IP address. The default quota for a floating IP address is 20 addresses per availability zone. The default cluster configuration yields three floating IP addresses: Two floating IP addresses in the us-east-1 primary zone. The IP address associated with the bootstrap node is removed after installation. One floating IP address in the us-east-2 secondary zone. One floating IP address in the us-east-3 secondary zone. IBM Cloud(R) can support up to 19 clusters per region in an account. If you plan to have more than 19 default clusters, you must increase this quota. Virtual Server Instances (VSI) By default, a cluster creates VSIs using bx2-4x16 profiles, which includes the following resources by default: 4 vCPUs 16 GB RAM The following nodes are created: One bx2-4x16 bootstrap machine, which is removed after the installation is complete Three bx2-4x16 control plane nodes Three bx2-4x16 compute nodes For more information, see IBM Cloud(R)'s documentation on supported profiles . Table 2.1. VSI component quotas and limits VSI component Default IBM Cloud(R) quota Default cluster configuration Maximum number of clusters vCPU 200 vCPUs per region 28 vCPUs, or 24 vCPUs after bootstrap removal 8 per region RAM 1600 GB per region 112 GB, or 96 GB after bootstrap removal 16 per region Storage 18 TB per region 1050 GB, or 900 GB after bootstrap removal 19 per region If you plan to exceed the resources stated in the table, you must increase your IBM Cloud(R) account quota. Block Storage Volumes For each VPC machine, a block storage device is attached for its boot volume. 
The default cluster configuration creates seven VPC machines, resulting in seven block storage volumes. Additional Kubernetes persistent volume claims (PVCs) of the IBM Cloud(R) storage class create additional block storage volumes. The default quota of VPC block storage volumes is 300 per region. To have more than 300 volumes, you must increase this quota. 2.3. Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud(R) DNS Services (DNS Services). 2.3.1. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud(R) Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note This offering does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. Prerequisites You have installed the IBM Cloud(R) CLI . You have an existing domain and registrar. For more information, see the IBM(R) documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: USD ibmcloud plugin install cis Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard-next 1 1 At a minimum, you require a Standard plan for CIS to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Connect an existing domain to your CIS instance: Set the context instance for CIS: USD ibmcloud cis instance-set <instance_name> 1 1 The instance cloud resource name. Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the next step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see the IBM Cloud(R) documentation .
Procedure Create a DNS Services instance to use with your cluster: Install the DNS Services plugin by running the following command: USD ibmcloud plugin install cloud-dns-services Create the DNS Services instance by running the following command: USD ibmcloud dns instance-create <instance-name> standard-dns 1 1 At a minimum, you require a Standard DNS plan for DNS Services to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Create a DNS zone for the DNS Services instance: Set the target operating DNS Services instance by running the following command: USD ibmcloud dns instance-target <instance-name> Add the DNS zone to the DNS Services instance by running the following command: USD ibmcloud dns zone-create <zone-name> 1 1 The fully qualified zone name. You can use either the root domain or subdomain value as the zone name, depending on which you plan to configure. A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Record the name of the DNS zone you have created. As part of the installation process, you must update the install-config.yaml file before deploying the cluster. Use the name of the DNS zone as the value for the baseDomain parameter. Note You do not have to manage permitted networks or configure an "A" DNS resource record. As required, the installation program configures these resources automatically. 2.4. IBM Cloud IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud(R) account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud(R) service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud(R) IAM overview, see the IBM Cloud(R) documentation . 2.4.1. Required access policies You must assign the required access policies to your IBM Cloud(R) account. Table 2.2. Required access policies Service type Service Access policy scope Platform access Service access Account management IAM Identity Service All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Service ID creator Account management [2] Identity and Access Management All resources Editor, Operator, Viewer, Administrator Account management Resource group only All resource groups in the account Administrator IAM services Cloud Object Storage All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager, Content Reader, Object Reader, Object Writer IAM services Internet Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services DNS Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services VPC Infrastructure Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager The policy access scope should be set based on how granular you want to assign access. The scope can be set to All resources or Resources based on selected attributes . Optional: This access policy is only required if you want the installation program to create a resource group. For more information about resource groups, see the IBM(R) documentation . 2.4.2. 
Access policy assignment In IBM Cloud(R) IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.4.3. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud(R) account. Prerequisites You have assigned the required access policies to your IBM Cloud(R) account. You have attached your IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud(R) API keys, see Understanding API keys . 2.5. Supported IBM Cloud regions You can deploy an OpenShift Container Platform cluster to the following regions: au-syd (Sydney, Australia) br-sao (Sao Paulo, Brazil) ca-tor (Toronto, Canada) eu-de (Frankfurt, Germany) eu-gb (London, United Kingdom) eu-es (Madrid, Spain) jp-osa (Osaka, Japan) jp-tok (Tokyo, Japan) us-east (Washington DC, United States) us-south (Dallas, United States) Note Deploying your cluster in the eu-es (Madrid, Spain) region is not supported for OpenShift Container Platform 4.14.6 and earlier versions. 2.6. Next steps Configuring IAM for IBM Cloud(R)
[ "ibmcloud plugin install cis", "ibmcloud cis instance-create <instance_name> standard-next 1", "ibmcloud cis instance-set <instance_name> 1", "ibmcloud cis domain-add <domain_name> 1", "ibmcloud plugin install cloud-dns-services", "ibmcloud dns instance-create <instance-name> standard-dns 1", "ibmcloud dns instance-target <instance-name>", "ibmcloud dns zone-create <zone-name> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_cloud/installing-ibm-cloud-account
Chapter 4. HorizontalPodAutoscaler [autoscaling/v2]
Chapter 4. HorizontalPodAutoscaler [autoscaling/v2] Description HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. status object HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. 4.1.1. .spec Description HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. Type object Required scaleTargetRef maxReplicas Property Type Description behavior object HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). maxReplicas integer maxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up. It cannot be less that minReplicas. metrics array metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. metrics[] object MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). minReplicas integer minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured. Scaling is active as long as at least one metric value is available. scaleTargetRef object CrossVersionObjectReference contains enough information to let you identify the referred resource. 4.1.2. .spec.behavior Description HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). Type object Property Type Description scaleDown object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. 
They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. scaleUp object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. 4.1.3. .spec.behavior.scaleDown Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.4. .spec.behavior.scaleDown.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.5. .spec.behavior.scaleDown.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string type is used to specify the scaling policy. value integer value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.6. .spec.behavior.scaleUp Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. 
At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.7. .spec.behavior.scaleUp.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.8. .spec.behavior.scaleUp.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string type is used to specify the scaling policy. value integer value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.9. .spec.metrics Description metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. Type array 4.1.10. .spec.metrics[] Description MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). Type object Required type Property Type Description containerResource object ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. external object ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). object object ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). pods object PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
resource object ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. type string type is the type of metric source. It should be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each mapping to a matching field in the object. Note: "ContainerResource" type is available only when the feature-gate HPAContainerMetrics is enabled. 4.1.11. .spec.metrics[].containerResource Description ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target container Property Type Description container string container is the name of the container in the pods of the scaling target name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.12. .spec.metrics[].containerResource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.13. .spec.metrics[].external Description ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.14. .spec.metrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.15.
.spec.metrics[].external.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.16. .spec.metrics[].object Description ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required describedObject target metric Property Type Description describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.17. .spec.metrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string apiVersion is the API version of the referent kind string kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.18. .spec.metrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.19. .spec.metrics[].object.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.20. .spec.metrics[].pods Description PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.21. .spec.metrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.22. .spec.metrics[].pods.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.23. .spec.metrics[].resource Description ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target Property Type Description name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.24. .spec.metrics[].resource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.25. .spec.scaleTargetRef Description CrossVersionObjectReference contains enough information to let you identify the referred resource. 
Type object Required kind name Property Type Description apiVersion string apiVersion is the API version of the referent kind string kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.26. .status Description HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. Type object Required desiredReplicas Property Type Description conditions array conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. conditions[] object HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. currentMetrics array currentMetrics is the last read state of the metrics used by this autoscaler. currentMetrics[] object MetricStatus describes the last-read state of a single metric. currentReplicas integer currentReplicas is current number of replicas of pods managed by this autoscaler, as last seen by the autoscaler. desiredReplicas integer desiredReplicas is the desired number of replicas of pods managed by this autoscaler, as last calculated by the autoscaler. lastScaleTime Time lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods, used by the autoscaler to control how often the number of pods is changed. observedGeneration integer observedGeneration is the most recent generation observed by this autoscaler. 4.1.27. .status.conditions Description conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. Type array 4.1.28. .status.conditions[] Description HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another message string message is a human-readable explanation containing details about the transition reason string reason is the reason for the condition's last transition. status string status is the status of the condition (True, False, Unknown) type string type describes the current condition 4.1.29. .status.currentMetrics Description currentMetrics is the last read state of the metrics used by this autoscaler. Type array 4.1.30. .status.currentMetrics[] Description MetricStatus describes the last-read state of a single metric. Type object Required type Property Type Description containerResource object ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. external object ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. object object ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). 
pods object PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). resource object ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. type string type is the type of metric source. It will be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each corresponding to a matching field in the object. Note: "ContainerResource" type is available only when the feature-gate HPAContainerMetrics is enabled. 4.1.31. .status.currentMetrics[].containerResource Description ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current container Property Type Description container string container is the name of the container in the pods of the scaling target current object MetricValueStatus holds the current value for a metric name string name is the name of the resource in question. 4.1.32. .status.currentMetrics[].containerResource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer averageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.33. .status.currentMetrics[].external Description ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.34. .status.currentMetrics[].external.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer averageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.35. .status.currentMetrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping.
When unset, just the metricName will be used to gather metrics. 4.1.36. .status.currentMetrics[].object Description ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required metric current describedObject Property Type Description current object MetricValueStatus holds the current value for a metric describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.37. .status.currentMetrics[].object.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.38. .status.currentMetrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string apiVersion is the API version of the referent kind string kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.39. .status.currentMetrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.40. .status.currentMetrics[].pods Description PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.41. .status.currentMetrics[].pods.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.42. 
.status.currentMetrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.43. .status.currentMetrics[].resource Description ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current Property Type Description current object MetricValueStatus holds the current value for a metric name string name is the name of the resource in question. 4.1.44. .status.currentMetrics[].resource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.2. API endpoints The following API endpoints are available: /apis/autoscaling/v2/horizontalpodautoscalers GET : list or watch objects of kind HorizontalPodAutoscaler /apis/autoscaling/v2/watch/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers DELETE : delete collection of HorizontalPodAutoscaler GET : list or watch objects of kind HorizontalPodAutoscaler POST : create a HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} DELETE : delete a HorizontalPodAutoscaler GET : read the specified HorizontalPodAutoscaler PATCH : partially update the specified HorizontalPodAutoscaler PUT : replace the specified HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} GET : watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status GET : read status of the specified HorizontalPodAutoscaler PATCH : partially update status of the specified HorizontalPodAutoscaler PUT : replace status of the specified HorizontalPodAutoscaler 4.2.1. /apis/autoscaling/v2/horizontalpodautoscalers HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.1. 
HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty 4.2.2. /apis/autoscaling/v2/watch/horizontalpodautoscalers HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. Table 4.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers HTTP method DELETE Description delete collection of HorizontalPodAutoscaler Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.5. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a HorizontalPodAutoscaler Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 202 - Accepted HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.4. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. Table 4.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.10. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler HTTP method DELETE Description delete a HorizontalPodAutoscaler Table 4.11. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HorizontalPodAutoscaler Table 4.13. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HorizontalPodAutoscaler Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HorizontalPodAutoscaler Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. 
Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.6. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.19. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler HTTP method GET Description watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.7. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status Table 4.21. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler HTTP method GET Description read status of the specified HorizontalPodAutoscaler Table 4.22. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HorizontalPodAutoscaler Table 4.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.24. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HorizontalPodAutoscaler Table 4.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.26. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.27. HTTP responses HTTP code Response body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty
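For example, the following manifest exercises the spec fields described above. This is a minimal sketch only: the example-hpa name, the example-ns namespace, and the example-app Deployment it targets are hypothetical placeholders, and the numeric values are illustrative. Apply it with oc apply (kubectl apply behaves the same way):

$ oc apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa              # hypothetical name
  namespace: example-ns          # hypothetical namespace
spec:
  scaleTargetRef:                # CrossVersionObjectReference; kind and name are required
    apiVersion: apps/v1
    kind: Deployment
    name: example-app            # hypothetical scale target
  minReplicas: 2
  maxReplicas: 10
  metrics:                       # Resource metric source targeting average CPU utilization
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
  behavior:
    scaleDown:                   # HPAScalingRules for the scale-down direction
      stabilizationWindowSeconds: 300   # the documented scale-down default, shown explicitly
      selectPolicy: Min
      policies:
      - type: Pods               # remove at most 2 pods per 60-second period
        value: 2
        periodSeconds: 60
EOF

Reading the object back, for example with oc get hpa example-hpa -n example-ns -o yaml, shows the status fields documented above (currentReplicas, desiredReplicas, currentMetrics, and conditions).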
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/autoscale_apis/horizontalpodautoscaler-autoscaling-v2
function::module_name
function::module_name Name function::module_name - The module name of the current script Synopsis Arguments None Description This function returns the name of the stap module. The name is either generated randomly (stap_[0-9a-f]+_[0-9a-f]+) or set with stap -m <module_name>.
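For example, assuming stap is installed and can build modules for the running kernel, the following one-liners print the value returned by module_name; the module name my_probe in the second command is an arbitrary example:

# Randomly generated module name (stap_[0-9a-f]+_[0-9a-f]+):
$ stap -e 'probe begin { printf("module: %s\n", module_name()); exit() }'

# Explicit module name set with -m; module_name() then returns "my_probe":
$ stap -m my_probe -e 'probe begin { printf("module: %s\n", module_name()); exit() }'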
[ "module_name:string()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-module-name
B.3. Checking a Package's Signature
B.3. Checking a Package's Signature If you want to verify that a package has not been corrupted or tampered with, examine only the md5sum by typing the following command at a shell prompt (where <rpm_file> is the file name of the RPM package): The message <rpm_file> : rsa sha1 (md5) pgp md5 OK (specifically the OK part of it) is displayed. This brief message means that the file was not corrupted during download. To see a more verbose message, replace -K with -Kvv in the command. On the other hand, how trustworthy is the developer who created the package? If the package is signed with the developer's GnuPG key, you know that the developer really is who they say they are. An RPM package can be signed using GNU Privacy Guard (or GnuPG), to help you make certain your downloaded package is trustworthy. GnuPG is a tool for secure communication; it is a complete and free replacement for the encryption technology of PGP, an electronic privacy program. With GnuPG, you can authenticate the validity of documents and encrypt/decrypt data to and from other recipients. GnuPG is capable of decrypting and verifying PGP 5.x files as well. During installation, GnuPG is installed by default. That way you can immediately start using GnuPG to verify any packages that you receive from Red Hat. Before doing so, you must first import Red Hat's public key. B.3.1. Importing Keys To verify Red Hat packages, you must import the Red Hat GnuPG key. To do so, execute the following command at a shell prompt: To display a list of all keys installed for RPM verification, execute the command: For the Red Hat key, the output includes: To display details about a specific key, use rpm -qi followed by the output from the command:
[ "-K --nosignature <rpm_file>", "--import /usr/share/rhn/RPM-GPG-KEY", "-qa gpg-pubkey*", "gpg-pubkey-db42a60e-37ea5438", "-qi gpg-pubkey-db42a60e-37ea5438" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-check-rpm-sig
Chapter 6. PersistentVolumeClaim [v1]
Chapter 6. PersistentVolumeClaim [v1] Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes status object PersistentVolumeClaimStatus is the current status of a persistent volume claim. 6.1.1. .spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object VolumeResourceRequirements describes the storage resource requirements for a volume. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 6.1.2. .spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.3. .spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. 
(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 6.1.4. .spec.resources Description VolumeResourceRequirements describes the storage resource requirements for a volume. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.5. .status Description PersistentVolumeClaimStatus is the current status of a persistent volume claim. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. 
For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources object (Quantity) allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity object (Quantity) capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc currentVolumeAttributesClassName string currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. When unset, there is no VolumeAttributeClass applied to this PersistentVolumeClaim This is an alpha field and requires enabling VolumeAttributesClass feature. modifyVolumeStatus object ModifyVolumeStatus represents the status object of ControllerModifyVolume operation phase string phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - "Pending" used for PersistentVolumeClaims that are not yet bound 6.1.6. 
.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. Type array 6.1.7. .status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required type status Property Type Description lastProbeTime Time lastProbeTime is the time we probed the condition. lastTransitionTime Time lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "Resizing" that means the underlying persistent volume is being resized. status string type string 6.1.8. .status.modifyVolumeStatus Description ModifyVolumeStatus represents the status object of ControllerModifyVolume operation Type object Required status Property Type Description status string status is the status of the ControllerModifyVolume operation. It can be in any of following states: - Pending Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing. - InProgress InProgress indicates that the volume is being modified. - Infeasible Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified. Note: New statuses can be added in the future. Consumers should check for unknown statuses and fail appropriately. Possible enum values: - "InProgress" InProgress indicates that the volume is being modified - "Infeasible" Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified - "Pending" Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing targetVolumeAttributesClassName string targetVolumeAttributesClassName is the name of the VolumeAttributesClass the PVC currently being reconciled 6.2. API endpoints The following API endpoints are available: /api/v1/persistentvolumeclaims GET : list or watch objects of kind PersistentVolumeClaim /api/v1/watch/persistentvolumeclaims GET : watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/persistentvolumeclaims DELETE : delete collection of PersistentVolumeClaim GET : list or watch objects of kind PersistentVolumeClaim POST : create a PersistentVolumeClaim /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims GET : watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name} DELETE : delete a PersistentVolumeClaim GET : read the specified PersistentVolumeClaim PATCH : partially update the specified PersistentVolumeClaim PUT : replace the specified PersistentVolumeClaim /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims/{name} GET : watch changes to an object of kind PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 
/api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}/status GET : read status of the specified PersistentVolumeClaim PATCH : partially update status of the specified PersistentVolumeClaim PUT : replace status of the specified PersistentVolumeClaim 6.2.1. /api/v1/persistentvolumeclaims HTTP method GET Description list or watch objects of kind PersistentVolumeClaim Table 6.1. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaimList schema 401 - Unauthorized Empty 6.2.2. /api/v1/watch/persistentvolumeclaims HTTP method GET Description watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. Table 6.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /api/v1/namespaces/{namespace}/persistentvolumeclaims HTTP method DELETE Description delete collection of PersistentVolumeClaim Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PersistentVolumeClaim Table 6.5. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaimList schema 401 - Unauthorized Empty HTTP method POST Description create a PersistentVolumeClaim Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 202 - Accepted PersistentVolumeClaim schema 401 - Unauthorized Empty 6.2.4. /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims HTTP method GET Description watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. Table 6.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. 
/api/v1/namespaces/{namespace}/persistentvolumeclaims/{name} Table 6.10. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim HTTP method DELETE Description delete a PersistentVolumeClaim Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.12. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 202 - Accepted PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method GET Description read the specified PersistentVolumeClaim Table 6.13. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PersistentVolumeClaim Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.15. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PersistentVolumeClaim Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.18. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty 6.2.6. /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims/{name} Table 6.19. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim HTTP method GET Description watch changes to an object of kind PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.7. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}/status Table 6.21. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim HTTP method GET Description read status of the specified PersistentVolumeClaim Table 6.22. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PersistentVolumeClaim Table 6.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.24. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PersistentVolumeClaim Table 6.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.26. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.27. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty
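The endpoints above can be exercised directly with curl once a bearer token is available. The following is a minimal sketch rather than part of the reference itself: the API server address ( https://api.example.com:6443 ), the namespace ( my-namespace ), the claim name ( my-claim ), and the label used in the patch are assumed placeholder values, and the token is taken from an existing oc login session.
# Assumed values; replace with your own cluster, namespace, and claim name.
API=https://api.example.com:6443
TOKEN=$(oc whoami -t)
# GET /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}: read the specified PersistentVolumeClaim
curl -k -H "Authorization: Bearer $TOKEN" \
  "$API/api/v1/namespaces/my-namespace/persistentvolumeclaims/my-claim"
# PATCH the same path: partially update the specified PersistentVolumeClaim
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"metadata":{"labels":{"backup":"enabled"}}}' \
  "$API/api/v1/namespaces/my-namespace/persistentvolumeclaims/my-claim"
A 200 - OK response returns the PersistentVolumeClaim schema shown in the tables above; a 401 - Unauthorized response indicates the bearer token is missing or not authorized.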
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage_apis/persistentvolumeclaim-v1
Appendix H. Examples using the Secure Token Service APIs
Appendix H. Examples using the Secure Token Service APIs These examples are using Python's boto3 module to interface with the Ceph Object Gateway's implementation of the Secure Token Service (STS). In these examples, TESTER2 assumes a role created by TESTER1 , as to access S3 resources owned by TESTER1 based on the permission policy attached to the role. The AssumeRole example creates a role, assigns a policy to the role, then assumes a role to get temporary credentials and access to S3 resources using those temporary credentials. The AssumeRoleWithWebIdentity example authenticates users using an external application with Keycloak, an OpenID Connect identity provider, assumes a role to get temporary credentials and access S3 resources according to the permission policy of the role. AssumeRole Example import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER1\"]},\"Action\":[\"sts:AssumeRole\"]}]}" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = sts_client.assume_role( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600 ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets() AssumeRoleWithWebIdentity Example import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) oidc_response = iam_client.create_open_id_connect_provider( Url=<URL of the OpenID Connect Provider>, ClientIDList=[ <Client id registered with the IDP> ], ThumbprintList=[ <IDP THUMBPRINT> ] ) policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"Federated\":\[\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/demo\"\]\},\"Action\":\[\"sts:AssumeRoleWithWebIdentity\"\],\"Condition\":\{\"StringEquals\":\{\"localhost:8080/auth/realms/demo:app_id\":\"customer-portal\"\}\}\}\]\}" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = sts_client.assume_role_with_web_identity( RoleArn=role_response['Role']['Arn'], 
RoleSessionName='Bob', DurationSeconds=3600, WebIdentityToken=<Web Token> ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets() Additional Resources See the Test S3 Access section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details on using Python's boto module.
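The <Web Token> value passed to assume_role_with_web_identity is an OpenID Connect access token issued by the identity provider. As a minimal sketch, assuming the Keycloak realm and client referenced in the example policy ( localhost:8080 , realm demo , client customer-portal ) and placeholder user credentials, such a token could be requested with curl before running the script:
# Placeholder credentials; replace with values from your own Keycloak realm and client.
curl -k -X POST "http://localhost:8080/auth/realms/demo/protocol/openid-connect/token" \
  -d "grant_type=password" \
  -d "client_id=customer-portal" \
  -d "client_secret=<CLIENT SECRET>" \
  -d "username=<KEYCLOAK USER>" \
  -d "password=<KEYCLOAK PASSWORD>" | jq -r .access_token
The access_token field of the JSON response is the web identity token; whether the resource owner password grant is enabled for the client depends on the Keycloak configuration.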
[ "import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) policy_document = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER1\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\\"s3:*\\\",\\\"Resource\\\":\\\"arn:aws:s3:::*\\\"}}\" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = sts_client.assume_role( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600 ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets()", "import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) oidc_response = iam_client.create_open_id_connect_provider( Url=<URL of the OpenID Connect Provider>, ClientIDList=[ <Client id registered with the IDP> ], ThumbprintList=[ <IDP THUMBPRINT> ] ) policy_document = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"Federated\\\":\\[\\\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/demo\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRoleWithWebIdentity\\\"\\],\\\"Condition\\\":\\{\\\"StringEquals\\\":\\{\\\"localhost:8080/auth/realms/demo:app_id\\\":\\\"customer-portal\\\"\\}\\}\\}\\]\\}\" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\\"s3:*\\\",\\\"Resource\\\":\\\"arn:aws:s3:::*\\\"}}\" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = sts_client.assume_role_with_web_identity( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600, WebIdentityToken=<Web Token> ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets()" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/examples-using-the-secure-token-service-apis_dev
Chapter 43. loadbalancer
Chapter 43. loadbalancer This chapter describes the commands under the loadbalancer command. 43.1. loadbalancer amphora configure Update the amphora agent configuration Usage: Table 43.1. Positional arguments Value Summary <amphora-id> Uuid of the amphora to configure. Table 43.2. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.2. loadbalancer amphora delete Delete a amphora Usage: Table 43.3. Positional arguments Value Summary <amphora-id> Uuid of the amphora to delete. Table 43.4. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.3. loadbalancer amphora failover Force failover an amphora Usage: Table 43.5. Positional arguments Value Summary <amphora-id> Uuid of the amphora. Table 43.6. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.4. loadbalancer amphora list List amphorae Usage: Table 43.7. Command arguments Value Summary -h, --help Show this help message and exit --loadbalancer <loadbalancer> Filter by load balancer (name or id). --compute-id <compute-id> Filter by compute id. --role {BACKUP,MASTER,STANDALONE} Filter by role. --status {ALLOCATED,BOOTING,DELETED,ERROR,PENDING_CREATE,PENDING_DELETE,READY}, --provisioning-status {ALLOCATED,BOOTING,DELETED,ERROR,PENDING_CREATE,PENDING_DELETE,READY} Filter by amphora provisioning status. --long Show additional fields. Table 43.8. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.9. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.5. loadbalancer amphora show Show the details of a single amphora Usage: Table 43.12. Positional arguments Value Summary <amphora-id> Uuid of the amphora. Table 43.13. Command arguments Value Summary -h, --help Show this help message and exit Table 43.14. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.15. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.16. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.17. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.6. loadbalancer amphora stats show Shows the current statistics for an amphora. Usage: Table 43.18. Positional arguments Value Summary <amphora-id> Uuid of the amphora. Table 43.19. Command arguments Value Summary -h, --help Show this help message and exit --listener <listener> Filter by listener (name or id). Table 43.20. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.21. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.22. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.7. loadbalancer availabilityzone create Create an octavia availability zone Usage: Table 43.24. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New availability zone name. --availabilityzoneprofile <availabilityzone_profile> Availability zone profile to add the az to (name or ID). --description <description> Set the availability zone description. --enable Enable the availability zone. --disable Disable the availability zone. Table 43.25. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.27. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.28. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.8. loadbalancer availabilityzone delete Delete an availability zone Usage: Table 43.29. Positional arguments Value Summary <availabilityzone> Name of the availability zone to delete. Table 43.30. Command arguments Value Summary -h, --help Show this help message and exit 43.9. loadbalancer availabilityzone list List availability zones Usage: Table 43.31. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List availability zones according to their name. --availabilityzoneprofile <availabilityzone_profile> List availability zones according to their az profile. --enable List enabled availability zones. 
--disable List disabled availability zones. Table 43.32. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.33. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.34. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.35. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.10. loadbalancer availabilityzone set Update an availability zone Usage: Table 43.36. Positional arguments Value Summary <availabilityzone> Name of the availability zone to update. Table 43.37. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Set the description of the availability zone. --enable Enable the availability zone. --disable Disable the availability zone. 43.11. loadbalancer availabilityzone show Show the details for a single availability zone Usage: Table 43.38. Positional arguments Value Summary <availabilityzone> Name of the availability zone. Table 43.39. Command arguments Value Summary -h, --help Show this help message and exit Table 43.40. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.41. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.42. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.43. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.12. loadbalancer availabilityzone unset Clear availability zone settings Usage: Table 43.44. Positional arguments Value Summary <availabilityzone> Name of the availability zone to update. Table 43.45. Command arguments Value Summary -h, --help Show this help message and exit --description Clear the availability zone description. 43.13. loadbalancer availabilityzoneprofile create Create an octavia availability zone profile Usage: Table 43.46. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New octavia availability zone profile name. --provider <provider name> Provider name for the availability zone profile. 
--availability-zone-data <availability_zone_data> The json string containing the availability zone metadata. Table 43.47. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.49. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.50. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.14. loadbalancer availabilityzoneprofile delete Delete an availability zone profile Usage: Table 43.51. Positional arguments Value Summary <availabilityzone_profile> Availability zone profile to delete (name or id). Table 43.52. Command arguments Value Summary -h, --help Show this help message and exit 43.15. loadbalancer availabilityzoneprofile list List availability zone profiles Usage: Table 43.53. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List availability zone profiles by profile name. --provider <provider_name> List availability zone profiles according to their provider. Table 43.54. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.55. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.56. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.57. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.16. loadbalancer availabilityzoneprofile set Update an availability zone profile Usage: Table 43.58. Positional arguments Value Summary <availabilityzone_profile> Name or uuid of the availability zone profile to update. Table 43.59. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the availability zone profile. --provider <provider_name> Set the provider of the availability zone profile. --availability-zone-data <availability_zone_data> Set the availability zone data of the profile. 43.17. loadbalancer availabilityzoneprofile show Show the details of a single availability zone profile Usage: Table 43.60. 
Positional arguments Value Summary <availabilityzone_profile> Name or uuid of the availability zone profile to show. Table 43.61. Command arguments Value Summary -h, --help Show this help message and exit Table 43.62. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.63. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.64. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.65. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.18. loadbalancer create Create a load balancer Usage: Table 43.66. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New load balancer name. --description <description> Set load balancer description. --vip-address <vip_address> Set the vip ip address. --vip-qos-policy-id <vip_qos_policy_id> Set qos policy id for vip port. unset with none . --additional-vip subnet-id=<name-or-uuid>[,ip-address=<ip>] Expose an additional vip on the load balancer. this parameter can be provided more than once. --project <project> Project for the load balancer (name or id). --provider <provider> Provider name for the load balancer. --availability-zone <availability_zone> Availability zone for the load balancer. --enable Enable load balancer (default). --disable Disable load balancer. --flavor <flavor> The name or id of the flavor for the load balancer. --wait Wait for action to complete. --tag <tag> Tag to be added to the load balancer (repeat option to set multiple tags) --no-tag No tags associated with the load balancer Table 43.67. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.68. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.69. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.70. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 43.71. VIP Network Value Summary At least one of the following arguments is required.--vip-port-id <vip_port_id> Set port for the load balancer (name or id). --vip-subnet-id <vip_subnet_id> Set subnet for the load balancer (name or id). --vip-network-id <vip_network_id> Set network for the load balancer (name or id). 43.19. loadbalancer delete Delete a load balancer Usage: Table 43.72. Positional arguments Value Summary <load_balancer> Load balancers to delete (name or id). Table 43.73. 
Command arguments Value Summary -h, --help Show this help message and exit --cascade Cascade the delete to all child elements of the load balancer. --wait Wait for action to complete. 43.20. loadbalancer failover Trigger load balancer failover Usage: Table 43.74. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer. Table 43.75. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.21. loadbalancer flavor create Create a octavia flavor Usage: Table 43.76. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New flavor name. --flavorprofile <flavor_profile> Flavor profile to add the flavor to (name or id). --description <description> Set flavor description. --enable Enable flavor. --disable Disable flavor. Table 43.77. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.78. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.79. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.80. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.22. loadbalancer flavor delete Delete a flavor Usage: Table 43.81. Positional arguments Value Summary <flavor> Flavor to delete (name or id). Table 43.82. Command arguments Value Summary -h, --help Show this help message and exit 43.23. loadbalancer flavor list List flavor Usage: Table 43.83. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List flavors according to their name. --flavorprofile <flavor_profile> List flavors according to their flavor profile. --enable List enabled flavors. --disable List disabled flavors. Table 43.84. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.85. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.86. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.87. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.24. loadbalancer flavor set Update a flavor Usage: Table 43.88. 
Positional arguments Value Summary <flavor> Name or uuid of the flavor to update. Table 43.89. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the flavor. --description <description> Set flavor description. --enable Enable flavor. --disable Disable flavor. 43.25. loadbalancer flavor show Show the details for a single flavor Usage: Table 43.90. Positional arguments Value Summary <flavor> Name or uuid of the flavor. Table 43.91. Command arguments Value Summary -h, --help Show this help message and exit Table 43.92. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.93. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.94. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.95. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.26. loadbalancer flavor unset Clear flavor settings Usage: Table 43.96. Positional arguments Value Summary <flavor> Flavor to update (name or id). Table 43.97. Command arguments Value Summary -h, --help Show this help message and exit --description Clear the flavor description. 43.27. loadbalancer flavorprofile create Create a octavia flavor profile Usage: Table 43.98. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New octavia flavor profile name. --provider <provider name> Provider name for the flavor profile. --flavor-data <flavor_data> The json string containing the flavor metadata. Table 43.99. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.100. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.101. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.102. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.28. loadbalancer flavorprofile delete Delete a flavor profile Usage: Table 43.103. Positional arguments Value Summary <flavor_profile> Flavor profiles to delete (name or id). Table 43.104. Command arguments Value Summary -h, --help Show this help message and exit 43.29. loadbalancer flavorprofile list List flavor profile Usage: Table 43.105. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List flavor profiles by flavor profile name. --provider <provider_name> List flavor profiles according to their provider. Table 43.106. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.107. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.108. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.30. loadbalancer flavorprofile set Update a flavor profile Usage: Table 43.110. Positional arguments Value Summary <flavor_profile> Name or uuid of the flavor profile to update. Table 43.111. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the flavor profile. --provider <provider_name> Set the provider of the flavor profile. --flavor-data <flavor_data> Set the flavor data of the flavor profile. 43.31. loadbalancer flavorprofile show Show the details for a single flavor profile Usage: Table 43.112. Positional arguments Value Summary <flavor_profile> Name or uuid of the flavor profile to show. Table 43.113. Command arguments Value Summary -h, --help Show this help message and exit Table 43.114. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.115. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.116. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.117. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.32. loadbalancer healthmonitor create Create a health monitor Usage: Table 43.118. Positional arguments Value Summary <pool> Set the pool for the health monitor (name or id). Table 43.119. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the health monitor name. --delay <delay> Set the time in seconds, between sending probes to members. --domain-name <domain_name> Set the domain name, which be injected into the http Host Header to the backend server for HTTP health check. --expected-codes <codes> Set the list of http status codes expected in response from the member to declare it healthy. 
--http-method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE} Set the http method that the health monitor uses for requests. --http-version <http_version> Set the http version. --timeout <timeout> Set the maximum time, in seconds, that a monitor waits to connect before it times out. This value must be less than the delay value. --max-retries <max_retries> The number of successful checks before changing the operating status of the member to ONLINE. --url-path <url_path> Set the http url path of the request sent by the monitor to test the health of a backend member. --type {PING,HTTP,TCP,HTTPS,TLS-HELLO,UDP-CONNECT,SCTP} Set the health monitor type. --max-retries-down <max_retries_down> Set the number of allowed check failures before changing the operating status of the member to ERROR. --enable Enable health monitor (default). --disable Disable health monitor. --wait Wait for action to complete. --tag <tag> Tag to be added to the health monitor (repeat option to set multiple tags) --no-tag No tags associated with the health monitor Table 43.120. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.121. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.122. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.123. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.33. loadbalancer healthmonitor delete Delete a health monitor Usage: Table 43.124. Positional arguments Value Summary <health_monitor> Health monitor to delete (name or id). Table 43.125. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.34. loadbalancer healthmonitor list List health monitors Usage: Table 43.126. Command arguments Value Summary -h, --help Show this help message and exit --tags <tag>[,<tag>,... ] List health monitor which have all given tag(s) (Comma-separated list of tags) --any-tags <tag>[,<tag>,... ] List health monitor which have any given tag(s) (Comma-separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude health monitor which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude health monitor which have any given tag(s) (Comma-separated list of tags) Table 43.127. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.128. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.129. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.130. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.35. loadbalancer healthmonitor set Update a health monitor Usage: Table 43.131. Positional arguments Value Summary <health_monitor> Health monitor to update (name or id). Table 43.132. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set health monitor name. --delay <delay> Set the time in seconds, between sending probes to members. --domain-name <domain_name> Set the domain name, which be injected into the http Host Header to the backend server for HTTP health check. --expected-codes <codes> Set the list of http status codes expected in response from the member to declare it healthy. --http-method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE} Set the http method that the health monitor uses for requests. --http-version <http_version> Set the http version. --timeout <timeout> Set the maximum time, in seconds, that a monitor waits to connect before it times out. This value must be less than the delay value. --max-retries <max_retries> Set the number of successful checks before changing the operating status of the member to ONLINE. --max-retries-down <max_retries_down> Set the number of allowed check failures before changing the operating status of the member to ERROR. --url-path <url_path> Set the http url path of the request sent by the monitor to test the health of a backend member. --enable Enable health monitor. --disable Disable health monitor. --wait Wait for action to complete. --tag <tag> Tag to be added to the health monitor (repeat option to set multiple tags) --no-tag Clear tags associated with the health monitor. specify both --tag and --no-tag to overwrite current tags 43.36. loadbalancer healthmonitor show Show the details of a single health monitor Usage: Table 43.133. Positional arguments Value Summary <health_monitor> Name or uuid of the health monitor. Table 43.134. Command arguments Value Summary -h, --help Show this help message and exit Table 43.135. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.136. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.137. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.138. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.37. loadbalancer healthmonitor unset Clear health monitor settings Usage: Table 43.139. Positional arguments Value Summary <health_monitor> Health monitor to update (name or id). Table 43.140. 
Command arguments Value Summary -h, --help Show this help message and exit --domain-name Clear the health monitor domain name. --expected-codes Reset the health monitor expected codes to the api default. --http-method Reset the health monitor http method to the api default. --http-version Reset the health monitor http version to the api default. --max-retries-down Reset the health monitor max retries down to the api default. --name Clear the health monitor name. --url-path Clear the health monitor url path. --wait Wait for action to complete. --tag <tag> Tag to be removed from the health monitor (repeat option to remove multiple tags) --all-tag Clear all tags associated with the health monitor 43.38. loadbalancer l7policy create Create a l7policy Usage: Table 43.141. Positional arguments Value Summary <listener> Listener to add l7policy to (name or id). Table 43.142. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the l7policy name. --description <description> Set l7policy description. --action {REDIRECT_TO_URL,REDIRECT_TO_POOL,REDIRECT_PREFIX,REJECT} Set the action of the policy. --redirect-pool <pool> Set the pool to redirect requests to (name or id). --redirect-url <url> Set the url to redirect requests to. --redirect-prefix <url> Set the url prefix to redirect requests to. --redirect-http-code <redirect_http_code> Set the http response code for redirect_url or REDIRECT_PREFIX action. --position <position> Sequence number of this l7 policy. --enable Enable l7policy (default). --disable Disable l7policy. --wait Wait for action to complete. --tag <tag> Tag to be added to the l7policy (repeat option to set multiple tags) --no-tag No tags associated with the l7policy Table 43.143. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.144. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.145. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.146. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.39. loadbalancer l7policy delete Delete a l7policy Usage: Table 43.147. Positional arguments Value Summary <policy> L7policy to delete (name or id). Table 43.148. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.40. loadbalancer l7policy list List l7policies Usage: Table 43.149. Command arguments Value Summary -h, --help Show this help message and exit --listener LISTENER List l7policies that applied to the given listener (name or ID). --tags <tag>[,<tag>,... ] List l7policy which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List l7policy which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude l7policy which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... 
] Exclude l7policy which have any given tag(s) (comma- separated list of tags) Table 43.150. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.151. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.152. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.153. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.41. loadbalancer l7policy set Update a l7policy Usage: Table 43.154. Positional arguments Value Summary <policy> L7policy to update (name or id). Table 43.155. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set l7policy name. --description <description> Set l7policy description. --action {REDIRECT_TO_URL,REDIRECT_TO_POOL,REDIRECT_PREFIX,REJECT} Set the action of the policy. --redirect-pool <pool> Set the pool to redirect requests to (name or id). --redirect-url <url> Set the url to redirect requests to. --redirect-prefix <url> Set the url prefix to redirect requests to. --redirect-http-code <redirect_http_code> Set the http response code for redirect_url or REDIRECT_PREFIX action. --position <position> Set sequence number of this l7 policy. --enable Enable l7policy. --disable Disable l7policy. --wait Wait for action to complete. --tag <tag> Tag to be added to the l7policy (repeat option to set multiple tags) --no-tag Clear tags associated with the l7policy. specify both --tag and --no-tag to overwrite current tags 43.42. loadbalancer l7policy show Show the details of a single l7policy Usage: Table 43.156. Positional arguments Value Summary <policy> Name or uuid of the l7policy. Table 43.157. Command arguments Value Summary -h, --help Show this help message and exit Table 43.158. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.159. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.160. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.161. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.43. 
loadbalancer l7policy unset Clear l7policy settings Usage: Table 43.162. Positional arguments Value Summary <policy> L7policy to update (name or id). Table 43.163. Command arguments Value Summary -h, --help Show this help message and exit --description Clear the l7policy description. --name Clear the l7policy name. --redirect-http-code Clear the l7policy redirect http code. --wait Wait for action to complete. --tag <tag> Tag to be removed from the l7policy (repeat option to remove multiple tags) --all-tag Clear all tags associated with the l7policy 43.44. loadbalancer l7rule create Create a l7rule Usage: Table 43.164. Positional arguments Value Summary <l7policy> L7policy to add l7rule to (name or id). Table 43.165. Command arguments Value Summary -h, --help Show this help message and exit --compare-type {REGEX,EQUAL_TO,CONTAINS,ENDS_WITH,STARTS_WITH} Set the compare type for the l7rule. --invert Invert l7rule. --value <value> Set the rule value to match on. --key <key> Set the key for the l7rule's value to match on. --type {FILE_TYPE,PATH,COOKIE,HOST_NAME,HEADER,SSL_CONN_HAS_CERT,SSL_VERIFY_RESULT,SSL_DN_FIELD} Set the type for the l7rule. --enable Enable l7rule (default). --disable Disable l7rule. --wait Wait for action to complete. --tag <tag> Tag to be added to the l7rule (repeat option to set multiple tags) --no-tag No tags associated with the l7rule Table 43.166. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.167. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.168. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.169. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.45. loadbalancer l7rule delete Delete a l7rule Usage: Table 43.170. Positional arguments Value Summary <l7policy> L7policy to delete rule from (name or id). <rule_id> L7rule to delete. Table 43.171. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.46. loadbalancer l7rule list List l7rules for l7policy Usage: Table 43.172. Positional arguments Value Summary <l7policy> L7policy to list rules for (name or id). Table 43.173. Command arguments Value Summary -h, --help Show this help message and exit --tags <tag>[,<tag>,... ] List l7rule which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List l7rule which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude l7rule which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude l7rule which have any given tag(s) (comma- separated list of tags) Table 43.174. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.175. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.176. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.177. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.47. loadbalancer l7rule set Update a l7rule Usage: Table 43.178. Positional arguments Value Summary <l7policy> L7policy to update l7rule on (name or id). <l7rule_id> L7rule to update. Table 43.179. Command arguments Value Summary -h, --help Show this help message and exit --compare-type {REGEX,EQUAL_TO,CONTAINS,ENDS_WITH,STARTS_WITH} Set the compare type for the l7rule. --invert Invert l7rule. --value <value> Set the rule value to match on. --key <key> Set the key for the l7rule's value to match on. --type {FILE_TYPE,PATH,COOKIE,HOST_NAME,HEADER,SSL_CONN_HAS_CERT,SSL_VERIFY_RESULT,SSL_DN_FIELD} Set the type for the l7rule. --enable Enable l7rule. --disable Disable l7rule. --wait Wait for action to complete. --tag <tag> Tag to be added to the l7rule (repeat option to set multiple tags) --no-tag Clear tags associated with the l7rule. specify both --tag and --no-tag to overwrite current tags 43.48. loadbalancer l7rule show Show the details of a single l7rule Usage: Table 43.180. Positional arguments Value Summary <l7policy> L7policy to show rule from (name or id). <l7rule_id> L7rule to show. Table 43.181. Command arguments Value Summary -h, --help Show this help message and exit Table 43.182. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.183. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.184. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.185. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.49. loadbalancer l7rule unset Clear l7rule settings Usage: Table 43.186. Positional arguments Value Summary <l7policy> L7policy to update (name or id). <l7rule_id> L7rule to update. Table 43.187. 
Command arguments Value Summary -h, --help Show this help message and exit --invert Reset the l7rule invert to the api default. --key Clear the l7rule key. --wait Wait for action to complete. --tag <tag> Tag to be removed from the l7rule (repeat option to remove multiple tags) --all-tag Clear all tags associated with the l7rule 43.50. loadbalancer list List load balancers Usage: Table 43.188. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List load balancers according to their name. --enable List enabled load balancers. --disable List disabled load balancers. --project <project-id> List load balancers according to their project (name or ID). --vip-network-id <vip_network_id> List load balancers according to their vip network (name or ID). --vip-subnet-id <vip_subnet_id> List load balancers according to their vip subnet (name or ID). --vip-qos-policy-id <vip_qos_policy_id> List load balancers according to their vip qos policy (name or ID). --vip-port-id <vip_port_id> List load balancers according to their vip port (name or ID). --provisioning-status {ACTIVE,ERROR,PENDING_CREATE,PENDING_UPDATE,PENDING_DELETE} List load balancers according to their provisioning status. --operating-status {ONLINE,DRAINING,OFFLINE,DEGRADED,ERROR,NO_MONITOR} List load balancers according to their operating status. --provider <provider> List load balancers according to their provider. --flavor <flavor> List load balancers according to their flavor. --availability-zone <availability_zone> List load balancers according to their availability zone. --tags <tag>[,<tag>,... ] List load balancer which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List load balancer which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude load balancer which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude load balancer which have any given tag(s) (Comma-separated list of tags) Table 43.189. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.190. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.191. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.192. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.51. loadbalancer listener create Create a listener Usage: Table 43.193. Positional arguments Value Summary <loadbalancer> Load balancer for the listener (name or id). Table 43.194. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the listener name. 
--description <description> Set the description of this listener. --protocol {TCP,HTTP,HTTPS,TERMINATED_HTTPS,UDP,SCTP,PROMETHEUS} The protocol for the listener. --connection-limit <limit> Set the maximum number of connections permitted for this listener. --default-pool <pool> Set the name or id of the pool used by the listener if no L7 policies match. --default-tls-container-ref <container_ref> The uri to the key manager service secrets container containing the certificate and key for TERMINATED_TLS listeners. --sni-container-refs [<container_ref> ... ] A list of uris to the key manager service secrets containers containing the certificates and keys for TERMINATED_TLS the listener using Server Name Indication. --insert-headers <header=value,... > A dictionary of optional headers to insert into the request before it is sent to the backend member. --protocol-port <port> Set the protocol port number for the listener. --timeout-client-data <timeout> Frontend client inactivity timeout in milliseconds. Default: 50000. --timeout-member-connect <timeout> Backend member connection timeout in milliseconds. Default: 5000. --timeout-member-data <timeout> Backend member inactivity timeout in milliseconds. Default: 50000. --timeout-tcp-inspect <timeout> Time, in milliseconds, to wait for additional tcp packets for content inspection. Default: 0. --enable Enable listener (default). --disable Disable listener. --client-ca-tls-container-ref <container_ref> The uri to the key manager service secrets container containing the CA certificate for TERMINATED_TLS listeners. --client-authentication {NONE,OPTIONAL,MANDATORY} The tls client authentication verify options for TERMINATED_TLS listeners. --client-crl-container-ref <client_crl_container_ref> The uri to the key manager service secrets container containing the CA revocation list file for TERMINATED_TLS listeners. --allowed-cidr [<allowed_cidr>] Cidr to allow access to the listener (can be set multiple times). --wait Wait for action to complete. --tls-ciphers <tls_ciphers> Set the tls ciphers to be used by the listener in OpenSSL format. --tls-version [<tls_versions>] Set the tls protocol version to be used by the listener (can be set multiple times). --alpn-protocol [<alpn_protocols>] Set the alpn protocol to be used by the listener (can be set multiple times). --tag <tag> Tag to be added to the listener (repeat option to set multiple tags) --no-tag No tags associated with the listener Table 43.195. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.196. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.197. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.198. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.52. loadbalancer listener delete Delete a listener Usage: Table 43.199. Positional arguments Value Summary <listener> Listener to delete (name or id). Table 43.200.
Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.53. loadbalancer listener list List listeners Usage: Table 43.201. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List listeners by listener name. --loadbalancer <loadbalancer> Filter by load balancer (name or id). --enable List enabled listeners. --disable List disabled listeners. --project <project> List listeners by project id. --tags <tag>[,<tag>,... ] List listener which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List listener which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude listener which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude listener which have any given tag(s) (comma- separated list of tags) Table 43.202. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.203. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.204. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.205. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.54. loadbalancer listener set Update a listener Usage: Table 43.206. Positional arguments Value Summary <listener> Listener to modify (name or id). Table 43.207. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the listener name. --description <description> Set the description of this listener. --connection-limit <limit> The maximum number of connections permitted for this listener. Default value is -1 which represents infinite connections. --default-pool <pool> The id of the pool used by the listener if no l7 policies match. --default-tls-container-ref <container-ref> The uri to the key manager service secrets container containing the certificate and key for TERMINATED_TLS listeners. --sni-container-refs [<container-ref> ... ] A list of uris to the key manager service secrets containers containing the certificates and keys for TERMINATED_TLS the listener using Server Name Indication. --insert-headers <header=value> A dictionary of optional headers to insert into the request before it is sent to the backend member. --timeout-client-data <timeout> Frontend client inactivity timeout in milliseconds. Default: 50000. --timeout-member-connect <timeout> Backend member connection timeout in milliseconds. Default: 5000. --timeout-member-data <timeout> Backend member inactivity timeout in milliseconds. Default: 50000. 
--timeout-tcp-inspect <timeout> Time, in milliseconds, to wait for additional tcp packets for content inspection. Default: 0. --enable Enable listener. --disable Disable listener. --client-ca-tls-container-ref <container_ref> The uri to the key manager service secrets container containing the CA certificate for TERMINATED_TLS listeners. --client-authentication {NONE,OPTIONAL,MANDATORY} The tls client authentication verify options for TERMINATED_TLS listeners. --client-crl-container-ref <client_crl_container_ref> The uri to the key manager service secrets container containing the CA revocation list file for TERMINATED_TLS listeners. --allowed-cidr [<allowed_cidr>] Cidr to allow access to the listener (can be set multiple times). --wait Wait for action to complete. --tls-ciphers <tls_ciphers> Set the tls ciphers to be used by the listener in OpenSSL format. --tls-version [<tls_versions>] Set the tls protocol version to be used by the listener (can be set multiple times). --alpn-protocol [<alpn_protocols>] Set the alpn protocol to be used by the listener (can be set multiple times). --tag <tag> Tag to be added to the listener (repeat option to set multiple tags) --no-tag Clear tags associated with the listener. specify both --tag and --no-tag to overwrite current tags 43.55. loadbalancer listener show Show the details of a single listener Usage: Table 43.208. Positional arguments Value Summary <listener> Name or uuid of the listener. Table 43.209. Command arguments Value Summary -h, --help Show this help message and exit Table 43.210. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.211. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.212. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.213. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.56. loadbalancer listener stats show Shows the current statistics for a listener. Usage: Table 43.214. Positional arguments Value Summary <listener> Name or uuid of the listener. Table 43.215. Command arguments Value Summary -h, --help Show this help message and exit Table 43.216. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.217. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.218. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.219. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0.
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.57. loadbalancer listener unset Clear listener settings Usage: Table 43.220. Positional arguments Value Summary <listener> Listener to modify (name or id). Table 43.221. Command arguments Value Summary -h, --help Show this help message and exit --name Clear the listener name. --description Clear the description of this listener. --connection-limit Reset the connection limit to the api default. --default-pool Clear the default pool from the listener. --default-tls-container-ref Remove the default tls container reference from the listener. --sni-container-refs Remove the tls sni container references from the listener. --insert-headers Clear the insert headers from the listener. --timeout-client-data Reset the client data timeout to the api default. --timeout-member-connect Reset the member connect timeout to the api default. --timeout-member-data Reset the member data timeout to the api default. --timeout-tcp-inspect Reset the tcp inspection timeout to the api default. --client-ca-tls-container-ref Clear the client ca tls container reference from the listener. --client-authentication Reset the client authentication setting to the api default. --client-crl-container-ref Clear the client crl container reference from the listener. --allowed-cidrs Clear all allowed cidrs from the listener. --tls-versions Clear all tls versions from the listener. --tls-ciphers Clear all tls ciphers from the listener. --wait Wait for action to complete. --alpn-protocols Clear all alpn protocols from the listener. --tag <tag> Tag to be removed from the listener (repeat option to remove multiple tags) --all-tag Clear all tags associated with the listener 43.58. loadbalancer member create Creating a member in a pool Usage: Table 43.222. Positional arguments Value Summary <pool> Id or name of the pool to create the member for. Table 43.223. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the member. --disable-backup Disable member backup (default). --enable-backup Enable member backup. --weight <weight> The weight of a member determines the portion of requests or connections it services compared to the other members of the pool. --address <ip_address> The ip address of the backend member server. --subnet-id <subnet_id> The subnet id the member service is accessible from. --protocol-port <protocol_port> The protocol port number the backend member server is listening on. --monitor-port <monitor_port> An alternate protocol port used for health monitoring a backend member. --monitor-address <monitor_address> An alternate ip address used for health monitoring a backend member. --enable Enable member (default). --disable Disable member. --wait Wait for action to complete. --tag <tag> Tag to be added to the member (repeat option to set multiple tags) --no-tag No tags associated with the member Table 43.224. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.225. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.226. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.227. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.59. loadbalancer member delete Delete a member from a pool Usage: Table 43.228. Positional arguments Value Summary <pool> Pool name or id to delete the member from. <member> Name or id of the member to be deleted. Table 43.229. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.60. loadbalancer member list List members in a pool Usage: Table 43.230. Positional arguments Value Summary <pool> Pool name or id to list the members of. Table 43.231. Command arguments Value Summary -h, --help Show this help message and exit --tags <tag>[,<tag>,... ] List member which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List member which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude member which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude member which have any given tag(s) (comma- separated list of tags) Table 43.232. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.233. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.234. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.235. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.61. loadbalancer member set Update a member Usage: Table 43.236. Positional arguments Value Summary <pool> Pool that the member to update belongs to (name or ID). <member> Name or id of the member to update. Table 43.237. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the member. --disable-backup Disable member backup (default). --enable-backup Enable member backup. --weight <weight> Set the weight of member in the pool. --monitor-port <monitor_port> An alternate protocol port used for health monitoring a backend member. --monitor-address <monitor_address> An alternate ip address used for health monitoring a backend member. --enable Set the admin_state_up to true. --disable Set the admin_state_up to false. --wait Wait for action to complete. --tag <tag> Tag to be added to the member (repeat option to set multiple tags) --no-tag Clear tags associated with the member. 
specify both --tag and --no-tag to overwrite current tags 43.62. loadbalancer member show Shows details of a single Member Usage: Table 43.238. Positional arguments Value Summary <pool> Pool name or id to show the members of. <member> Name or id of the member to show. Table 43.239. Command arguments Value Summary -h, --help Show this help message and exit Table 43.240. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.241. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.242. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.243. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.63. loadbalancer member unset Clear member settings Usage: Table 43.244. Positional arguments Value Summary <pool> Pool that the member to update belongs to (name or id). <member> Member to modify (name or id). Table 43.245. Command arguments Value Summary -h, --help Show this help message and exit --backup Clear the backup member flag. --monitor-address Clear the member monitor address. --monitor-port Clear the member monitor port. --name Clear the member name. --weight Reset the member weight to the api default. --wait Wait for action to complete. --tag <tag> Tag to be removed from the member (repeat option to remove multiple tags) --all-tag Clear all tags associated with the member 43.64. loadbalancer pool create Create a pool Usage: Table 43.246. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set pool name. --description <description> Set pool description. --protocol {TCP,HTTP,HTTPS,PROXY,PROXYV2,UDP,SCTP} Set the pool protocol. --listener <listener> Listener to add the pool to (name or id). --loadbalancer <load_balancer> Load balancer to add the pool to (name or id). --session-persistence <session persistence> Set the session persistence for the listener (key=value). --lb-algorithm {SOURCE_IP,ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP_PORT} Load balancing algorithm to use. --enable Enable pool (default). --disable Disable pool. --tls-container-ref <container-ref> The reference to the key manager service secrets container containing the certificate and key for ``tls_enabled`` pools to re-encrypt the traffic to backend member servers. --ca-tls-container-ref <ca_tls_container_ref> The reference to the key manager service secrets container containing the CA certificate for ``tls_enabled`` pools to check the backend member servers certificates. --crl-container-ref <crl_container_ref> The reference to the key manager service secrets container containing the CA revocation list file for ``tls_enabled`` pools to validate the backend member servers certificates. --enable-tls Enable backend member re-encryption. --disable-tls Disable backend member re-encryption. --wait Wait for action to complete. --tls-ciphers <tls_ciphers> Set the tls ciphers to be used by the pool in openssl cipher string format.
--tls-version [<tls_versions>] Set the tls protocol version to be used by the pool (can be set multiple times). --alpn-protocol [<alpn_protocols>] Set the alpn protocol to be used by the pool (can be set multiple times). --tag <tag> Tag to be added to the pool (repeat option to set multiple tags) --no-tag No tags associated with the pool Table 43.247. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.248. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.249. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.250. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.65. loadbalancer pool delete Delete a pool Usage: Table 43.251. Positional arguments Value Summary <pool> Pool to delete (name or id). Table 43.252. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for action to complete. 43.66. loadbalancer pool list List pools Usage: Table 43.253. Command arguments Value Summary -h, --help Show this help message and exit --loadbalancer <loadbalancer> Filter by load balancer (name or id). --tags <tag>[,<tag>,... ] List pool which have all given tag(s) (comma-separated list of tags) --any-tags <tag>[,<tag>,... ] List pool which have any given tag(s) (comma-separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude pool which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude pool which have any given tag(s) (comma- separated list of tags) Table 43.254. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.255. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.256. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.257. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.67. loadbalancer pool set Update a pool Usage: Table 43.258. Positional arguments Value Summary <pool> Pool to update (name or id). Table 43.259. 
Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the pool. --description <description> Set the description of the pool. --session-persistence <session_persistence> Set the session persistence for the listener (key=value). --lb-algorithm {SOURCE_IP,ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP_PORT} Set the load balancing algorithm to use. --enable Enable pool. --disable Disable pool. --tls-container-ref <container-ref> The uri to the key manager service secrets container containing the certificate and key for TERMINATED_TLS pools to re-encrypt the traffic from TERMINATED_TLS listener to backend servers. --ca-tls-container-ref <ca_tls_container_ref> The uri to the key manager service secrets container containing the CA certificate for TERMINATED_TLS listeners to check the backend servers certificates in ssl traffic. --crl-container-ref <crl_container_ref> The uri to the key manager service secrets container containing the CA revocation list file for TERMINATED_TLS listeners to validate the backend servers certificates in ssl traffic. --enable-tls Enable backend associated members re-encryption. --disable-tls Disable backend associated members re-encryption. --wait Wait for action to complete. --tls-ciphers <tls_ciphers> Set the tls ciphers to be used by the pool in openssl cipher string format. --tls-version [<tls_versions>] Set the tls protocol version to be used by the pool (can be set multiple times). --alpn-protocol [<alpn_protocols>] Set the alpn protocol to be used by the pool (can be set multiple times). --tag <tag> Tag to be added to the pool (repeat option to set multiple tags) --no-tag Clear tags associated with the pool. specify both --tag and --no-tag to overwrite current tags 43.68. loadbalancer pool show Show the details of a single pool Usage: Table 43.260. Positional arguments Value Summary <pool> Name or uuid of the pool. Table 43.261. Command arguments Value Summary -h, --help Show this help message and exit Table 43.262. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.263. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.264. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.265. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.69. loadbalancer pool unset Clear pool settings Usage: Table 43.266. Positional arguments Value Summary <pool> Pool to modify (name or id). Table 43.267. Command arguments Value Summary -h, --help Show this help message and exit --name Clear the pool name. --description Clear the description of this pool. --ca-tls-container-ref Clear the certificate authority certificate reference on this pool. --crl-container-ref Clear the certificate revocation list reference on this pool. --session-persistence Disables session persistence on the pool. --tls-container-ref Clear the certificate reference for this pool.
--tls-versions Clear all tls versions from the pool. --tls-ciphers Clear all tls ciphers from the pool. --wait Wait for action to complete. --alpn-protocols Clear all alpn protocols from the pool. --tag <tag> Tag to be removed from the pool (repeat option to remove multiple tags) --all-tag Clear all tags associated with the pool 43.70. loadbalancer provider capability list List specified provider driver's capabilities. Usage: Table 43.268. Positional arguments Value Summary <provider_name> Name of the provider driver. Table 43.269. Command arguments Value Summary -h, --help Show this help message and exit --flavor Get capabilities for flavor only. --availability-zone Get capabilities for availability zone only. Table 43.270. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.271. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.272. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.273. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.71. loadbalancer provider list List all providers Usage: Table 43.274. Command arguments Value Summary -h, --help Show this help message and exit Table 43.275. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.276. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.277. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.278. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.72. loadbalancer quota defaults show Show quota defaults Usage: Table 43.279. Command arguments Value Summary -h, --help Show this help message and exit Table 43.280. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.281. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.282. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.283. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.73. loadbalancer quota list List quotas Usage: Table 43.284. Command arguments Value Summary -h, --help Show this help message and exit --project <project-id> Name or uuid of the project. Table 43.285. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 43.286. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.287. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.288. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.74. loadbalancer quota reset Resets quotas to default quotas Usage: Table 43.289. Positional arguments Value Summary <project> Project to reset quotas (name or id). Table 43.290. Command arguments Value Summary -h, --help Show this help message and exit 43.75. loadbalancer quota set Update a quota Usage: Table 43.291. Positional arguments Value Summary <project> Name or uuid of the project. Table 43.292. Command arguments Value Summary -h, --help Show this help message and exit Table 43.293. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.294. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.295. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.296. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 43.297. Quota limits Value Summary At least one of the following arguments is required. --healthmonitor <health_monitor> New value for the health monitor quota. value -1 means unlimited. --listener <listener> New value for the listener quota. value -1 means unlimited. --loadbalancer <load_balancer> New value for the load balancer quota limit. value -1 means unlimited. --member <member> New value for the member quota limit. value -1 means unlimited. --pool <pool> New value for the pool quota limit. value -1 means unlimited. --l7policy <l7policy> New value for the l7policy quota limit. value -1 means unlimited. --l7rule <l7rule> New value for the l7rule quota limit. value -1 means unlimited. 43.76. loadbalancer quota show Show the quota details for a project Usage: Table 43.298. Positional arguments Value Summary <project> Name or uuid of the project. Table 43.299. Command arguments Value Summary -h, --help Show this help message and exit Table 43.300. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.301. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.302. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.303. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.77. loadbalancer quota unset Clear quota settings Usage: Table 43.304. Positional arguments Value Summary <project> Name or uuid of the project. Table 43.305. Command arguments Value Summary -h, --help Show this help message and exit --loadbalancer Reset the load balancer quota to the default. --listener Reset the listener quota to the default. --pool Reset the pool quota to the default. --member Reset the member quota to the default. --healthmonitor Reset the health monitor quota to the default. --l7policy Reset the l7policy quota to the default. --l7rule Reset the l7rule quota to the default. 43.78. loadbalancer set Update a load balancer Usage: Table 43.306. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer to update. Table 43.307. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set load balancer name. --description <description> Set load balancer description. --vip-qos-policy-id <vip_qos_policy_id> Set qos policy id for vip port. unset with none . --enable Enable load balancer. --disable Disable load balancer. --wait Wait for action to complete. --tag <tag> Tag to be added to the load balancer (repeat option to set multiple tags) --no-tag Clear tags associated with the load balancer. specify both --tag and --no-tag to overwrite current tags 43.79. loadbalancer show Show the details for a single load balancer Usage: Table 43.308. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer. Table 43.309.
Command arguments Value Summary -h, --help Show this help message and exit Table 43.310. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.311. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.312. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.313. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.80. loadbalancer stats show Shows the current statistics for a load balancer Usage: Table 43.314. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer. Table 43.315. Command arguments Value Summary -h, --help Show this help message and exit Table 43.316. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 43.317. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 43.318. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.319. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.81. loadbalancer status show Display load balancer status tree in json format Usage: Table 43.320. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer. Table 43.321. Command arguments Value Summary -h, --help Show this help message and exit 43.82. loadbalancer unset Clear load balancer settings Usage: Table 43.322. Positional arguments Value Summary <load_balancer> Name or uuid of the load balancer to update. Table 43.323. Command arguments Value Summary -h, --help Show this help message and exit --name Clear the load balancer name. --description Clear the load balancer description. --vip-qos-policy-id Clear the load balancer qos policy. --wait Wait for action to complete. --tag <tag> Tag to be removed from the load balancer (repeat option to remove multiple tags) --all-tag Clear all tags associated with the load balancer
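As an illustrative, non-normative example of how the commands in this chapter fit together, the following sketch provisions a complete load balancer: a load balancer with a VIP, an HTTP listener, a round-robin pool, one backend member, and an HTTP health monitor. Only options documented in the tables above are used; the resource names (lb1, listener1, pool1, member1, hm1), the subnets (public-subnet, private-subnet), the member address 192.0.2.10, and the URL path /healthz are placeholder values for your environment, not defaults.
# Placeholder names, subnets, addresses, and paths; adjust to your environment.
openstack loadbalancer create --name lb1 --vip-subnet-id public-subnet --wait
openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 --wait lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --wait
openstack loadbalancer member create --name member1 --address 192.0.2.10 --subnet-id private-subnet --protocol-port 8080 --wait pool1
openstack loadbalancer healthmonitor create --name hm1 --delay 5 --timeout 3 --max-retries 3 --type HTTP --url-path /healthz --wait pool1
# Display the status tree for the new load balancer.
openstack loadbalancer status show lb1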
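Building on the same placeholder names, the next sketch illustrates layer 7 policy and rule handling: requests whose path starts with /static are redirected to a second pool on the same load balancer. This is an assumed example rather than required configuration.
# Placeholder names; static-pool is created on lb1 and attached by the policy.
openstack loadbalancer pool create --name static-pool --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP --wait
openstack loadbalancer l7policy create --name redirect-static --action REDIRECT_TO_POOL --redirect-pool static-pool --wait listener1
openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /static --wait redirect-static
# List the rules now attached to the policy.
openstack loadbalancer l7rule list redirect-static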
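Finally, a short sketch of quota management for a placeholder project named project1, combining the quota commands in sections 43.72 to 43.77; a value of -1 means unlimited.
# Inspect the defaults, raise selected quotas, then reset one quota to the default.
openstack loadbalancer quota defaults show
openstack loadbalancer quota set --loadbalancer 10 --listener 20 --member -1 project1
openstack loadbalancer quota show project1
openstack loadbalancer quota unset --member project1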
[ "openstack loadbalancer amphora configure [-h] [--wait] <amphora-id>", "openstack loadbalancer amphora delete [-h] [--wait] <amphora-id>", "openstack loadbalancer amphora failover [-h] [--wait] <amphora-id>", "openstack loadbalancer amphora list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--loadbalancer <loadbalancer>] [--compute-id <compute-id>] [--role {BACKUP,MASTER,STANDALONE}] [--status {ALLOCATED,BOOTING,DELETED,ERROR,PENDING_CREATE,PENDING_DELETE,READY}] [--long]", "openstack loadbalancer amphora show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <amphora-id>", "openstack loadbalancer amphora stats show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--listener <listener>] <amphora-id>", "openstack loadbalancer availabilityzone create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <name> --availabilityzoneprofile <availabilityzone_profile> [--description <description>] [--enable | --disable]", "openstack loadbalancer availabilityzone delete [-h] <availabilityzone>", "openstack loadbalancer availabilityzone list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--availabilityzoneprofile <availabilityzone_profile>] [--enable | --disable]", "openstack loadbalancer availabilityzone set [-h] [--description <description>] [--enable | --disable] <availabilityzone>", "openstack loadbalancer availabilityzone show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <availabilityzone>", "openstack loadbalancer availabilityzone unset [-h] [--description] <availabilityzone>", "openstack loadbalancer availabilityzoneprofile create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <name> --provider <provider name> --availability-zone-data <availability_zone_data>", "openstack loadbalancer availabilityzoneprofile delete [-h] <availabilityzone_profile>", "openstack loadbalancer availabilityzoneprofile list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--provider <provider_name>]", "openstack loadbalancer availabilityzoneprofile set [-h] [--name <name>] [--provider <provider_name>] [--availability-zone-data <availability_zone_data>] <availabilityzone_profile>", "openstack loadbalancer availabilityzoneprofile show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <availabilityzone_profile>", "openstack loadbalancer create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] 
[--vip-address <vip_address>] [--vip-port-id <vip_port_id>] [--vip-subnet-id <vip_subnet_id>] [--vip-network-id <vip_network_id>] [--vip-qos-policy-id <vip_qos_policy_id>] [--additional-vip subnet-id=<name-or-uuid>[,ip-address=<ip>]] [--project <project>] [--provider <provider>] [--availability-zone <availability_zone>] [--enable | --disable] [--flavor <flavor>] [--wait] [--tag <tag> | --no-tag]", "openstack loadbalancer delete [-h] [--cascade] [--wait] <load_balancer>", "openstack loadbalancer failover [-h] [--wait] <load_balancer>", "openstack loadbalancer flavor create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <name> --flavorprofile <flavor_profile> [--description <description>] [--enable | --disable]", "openstack loadbalancer flavor delete [-h] <flavor>", "openstack loadbalancer flavor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--flavorprofile <flavor_profile>] [--enable | --disable]", "openstack loadbalancer flavor set [-h] [--name <name>] [--description <description>] [--enable | --disable] <flavor>", "openstack loadbalancer flavor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flavor>", "openstack loadbalancer flavor unset [-h] [--description] <flavor>", "openstack loadbalancer flavorprofile create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <name> --provider <provider name> --flavor-data <flavor_data>", "openstack loadbalancer flavorprofile delete [-h] <flavor_profile>", "openstack loadbalancer flavorprofile list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--provider <provider_name>]", "openstack loadbalancer flavorprofile set [-h] [--name <name>] [--provider <provider_name>] [--flavor-data <flavor_data>] <flavor_profile>", "openstack loadbalancer flavorprofile show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flavor_profile>", "openstack loadbalancer healthmonitor create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] --delay <delay> [--domain-name <domain_name>] [--expected-codes <codes>] [--http-method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE}] [--http-version <http_version>] --timeout <timeout> --max-retries <max_retries> [--url-path <url_path>] --type {PING,HTTP,TCP,HTTPS,TLS-HELLO,UDP-CONNECT,SCTP} [--max-retries-down <max_retries_down>] [--enable | --disable] [--wait] [--tag <tag> | --no-tag] <pool>", "openstack loadbalancer healthmonitor delete [-h] [--wait] <health_monitor>", "openstack loadbalancer healthmonitor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--tags <tag>[,<tag>,...]] [--any-tags 
<tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack loadbalancer healthmonitor set [-h] [--name <name>] [--delay <delay>] [--domain-name <domain_name>] [--expected-codes <codes>] [--http-method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE}] [--http-version <http_version>] [--timeout <timeout>] [--max-retries <max_retries>] [--max-retries-down <max_retries_down>] [--url-path <url_path>] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <health_monitor>", "openstack loadbalancer healthmonitor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <health_monitor>", "openstack loadbalancer healthmonitor unset [-h] [--domain-name] [--expected-codes] [--http-method] [--http-version] [--max-retries-down] [--name] [--url-path] [--wait] [--tag <tag> | --all-tag] <health_monitor>", "openstack loadbalancer l7policy create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] --action {REDIRECT_TO_URL,REDIRECT_TO_POOL,REDIRECT_PREFIX,REJECT} [--redirect-pool <pool> | --redirect-url <url> | --redirect-prefix <url>] [--redirect-http-code <redirect_http_code>] [--position <position>] [--enable | --disable] [--wait] [--tag <tag> | --no-tag] <listener>", "openstack loadbalancer l7policy delete [-h] [--wait] <policy>", "openstack loadbalancer l7policy list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--listener LISTENER] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack loadbalancer l7policy set [-h] [--name <name>] [--description <description>] [--action {REDIRECT_TO_URL,REDIRECT_TO_POOL,REDIRECT_PREFIX,REJECT}] [--redirect-pool <pool> | --redirect-url <url> | --redirect-prefix <url>] [--redirect-http-code <redirect_http_code>] [--position <position>] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <policy>", "openstack loadbalancer l7policy show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <policy>", "openstack loadbalancer l7policy unset [-h] [--description] [--name] [--redirect-http-code] [--wait] [--tag <tag> | --all-tag] <policy>", "openstack loadbalancer l7rule create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --compare-type {REGEX,EQUAL_TO,CONTAINS,ENDS_WITH,STARTS_WITH} [--invert] --value <value> [--key <key>] --type {FILE_TYPE,PATH,COOKIE,HOST_NAME,HEADER,SSL_CONN_HAS_CERT,SSL_VERIFY_RESULT,SSL_DN_FIELD} [--enable | --disable] [--wait] [--tag <tag> | --no-tag] <l7policy>", "openstack loadbalancer l7rule delete [-h] [--wait] <l7policy> <rule_id>", "openstack loadbalancer l7rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]] <l7policy>", "openstack loadbalancer l7rule set 
[-h] [--compare-type {REGEX,EQUAL_TO,CONTAINS,ENDS_WITH,STARTS_WITH}] [--invert] [--value <value>] [--key <key>] [--type {FILE_TYPE,PATH,COOKIE,HOST_NAME,HEADER,SSL_CONN_HAS_CERT,SSL_VERIFY_RESULT,SSL_DN_FIELD}] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <l7policy> <l7rule_id>", "openstack loadbalancer l7rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <l7policy> <l7rule_id>", "openstack loadbalancer l7rule unset [-h] [--invert] [--key] [--wait] [--tag <tag> | --all-tag] <l7policy> <l7rule_id>", "openstack loadbalancer list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--enable | --disable] [--project <project-id>] [--vip-network-id <vip_network_id>] [--vip-subnet-id <vip_subnet_id>] [--vip-qos-policy-id <vip_qos_policy_id>] [--vip-port-id <vip_port_id>] [--provisioning-status {ACTIVE,ERROR,PENDING_CREATE,PENDING_UPDATE,PENDING_DELETE}] [--operating-status {ONLINE,DRAINING,OFFLINE,DEGRADED,ERROR,NO_MONITOR}] [--provider <provider>] [--flavor <flavor>] [--availability-zone <availability_zone>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack loadbalancer listener create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] --protocol {TCP,HTTP,HTTPS,TERMINATED_HTTPS,UDP,SCTP,PROMETHEUS} [--connection-limit <limit>] [--default-pool <pool>] [--default-tls-container-ref <container_ref>] [--sni-container-refs [<container_ref> ...]] [--insert-headers <header=value,...>] --protocol-port <port> [--timeout-client-data <timeout>] [--timeout-member-connect <timeout>] [--timeout-member-data <timeout>] [--timeout-tcp-inspect <timeout>] [--enable | --disable] [--client-ca-tls-container-ref <container_ref>] [--client-authentication {NONE,OPTIONAL,MANDATORY}] [--client-crl-container-ref <client_crl_container_ref>] [--allowed-cidr [<allowed_cidr>]] [--wait] [--tls-ciphers <tls_ciphers>] [--tls-version [<tls_versions>]] [--alpn-protocol [<alpn_protocols>]] [--tag <tag> | --no-tag] <loadbalancer>", "openstack loadbalancer listener delete [-h] [--wait] <listener>", "openstack loadbalancer listener list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--loadbalancer <loadbalancer>] [--enable | --disable] [--project <project>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack loadbalancer listener set [-h] [--name <name>] [--description <description>] [--connection-limit <limit>] [--default-pool <pool>] [--default-tls-container-ref <container-ref>] [--sni-container-refs [<container-ref> ...]] [--insert-headers <header=value>] [--timeout-client-data <timeout>] [--timeout-member-connect <timeout>] [--timeout-member-data <timeout>] [--timeout-tcp-inspect <timeout>] [--enable | --disable] [--client-ca-tls-container-ref <container_ref>] [--client-authentication {NONE,OPTIONAL,MANDATORY}] [--client-crl-container-ref 
<client_crl_container_ref>] [--allowed-cidr [<allowed_cidr>]] [--wait] [--tls-ciphers <tls_ciphers>] [--tls-version [<tls_versions>]] [--alpn-protocol [<alpn_protocols>]] [--tag <tag>] [--no-tag] <listener>", "openstack loadbalancer listener show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <listener>", "openstack loadbalancer listener stats show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <listener>", "openstack loadbalancer listener unset [-h] [--name] [--description] [--connection-limit] [--default-pool] [--default-tls-container-ref] [--sni-container-refs] [--insert-headers] [--timeout-client-data] [--timeout-member-connect] [--timeout-member-data] [--timeout-tcp-inspect] [--client-ca-tls-container-ref] [--client-authentication] [--client-crl-container-ref] [--allowed-cidrs] [--tls-versions] [--tls-ciphers] [--wait] [--alpn-protocols] [--tag <tag> | --all-tag] <listener>", "openstack loadbalancer member create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--disable-backup | --enable-backup] [--weight <weight>] --address <ip_address> [--subnet-id <subnet_id>] --protocol-port <protocol_port> [--monitor-port <monitor_port>] [--monitor-address <monitor_address>] [--enable | --disable] [--wait] [--tag <tag> | --no-tag] <pool>", "openstack loadbalancer member delete [-h] [--wait] <pool> <member>", "openstack loadbalancer member list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]] <pool>", "openstack loadbalancer member set [-h] [--name <name>] [--disable-backup | --enable-backup] [--weight <weight>] [--monitor-port <monitor_port>] [--monitor-address <monitor_address>] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <pool> <member>", "openstack loadbalancer member show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <pool> <member>", "openstack loadbalancer member unset [-h] [--backup] [--monitor-address] [--monitor-port] [--name] [--weight] [--wait] [--tag <tag> | --all-tag] <pool> <member>", "openstack loadbalancer pool create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] --protocol {TCP,HTTP,HTTPS,PROXY,PROXYV2,UDP,SCTP} (--listener <listener> | --loadbalancer <load_balancer>) [--session-persistence <session persistence>] --lb-algorithm {SOURCE_IP,ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP_PORT} [--enable | --disable] [--tls-container-ref <container-ref>] [--ca-tls-container-ref <ca_tls_container_ref>] [--crl-container-ref <crl_container_ref>] [--enable-tls | --disable-tls] [--wait] [--tls-ciphers <tls_ciphers>] [--tls-version [<tls_versions>]] [--alpn-protocol [<alpn_protocols>]] [--tag <tag> | --no-tag]", "openstack loadbalancer pool delete [-h] [--wait] <pool>", "openstack loadbalancer pool list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote 
{all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--loadbalancer <loadbalancer>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack loadbalancer pool set [-h] [--name <name>] [--description <description>] [--session-persistence <session_persistence>] [--lb-algorithm {SOURCE_IP,ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP_PORT}] [--enable | --disable] [--tls-container-ref <container-ref>] [--ca-tls-container-ref <ca_tls_container_ref>] [--crl-container-ref <crl_container_ref>] [--enable-tls | --disable-tls] [--wait] [--tls-ciphers <tls_ciphers>] [--tls-version [<tls_versions>]] [--alpn-protocol [<alpn_protocols>]] [--tag <tag>] [--no-tag] <pool>", "openstack loadbalancer pool show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <pool>", "openstack loadbalancer pool unset [-h] [--name] [--description] [--ca-tls-container-ref] [--crl-container-ref] [--session-persistence] [--tls-container-ref] [--tls-versions] [--tls-ciphers] [--wait] [--alpn-protocols] [--tag <tag> | --all-tag] <pool>", "openstack loadbalancer provider capability list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--flavor | --availability-zone] <provider_name>", "openstack loadbalancer provider list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack loadbalancer quota defaults show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]", "openstack loadbalancer quota list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project-id>]", "openstack loadbalancer quota reset [-h] <project>", "openstack loadbalancer quota set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--healthmonitor <health_monitor>] [--listener <listener>] [--loadbalancer <load_balancer>] [--member <member>] [--pool <pool>] [--l7policy <l7policy>] [--l7rule <l7rule>] <project>", "openstack loadbalancer quota show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <project>", "openstack loadbalancer quota unset [-h] [--loadbalancer] [--listener] [--pool] [--member] [--healthmonitor] [--l7policy] [--l7rule] <project>", "openstack loadbalancer set [-h] [--name <name>] [--description <description>] [--vip-qos-policy-id <vip_qos_policy_id>] [--enable | --disable] [--wait] [--tag <tag>] [--no-tag] <load_balancer>", "openstack loadbalancer show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <load_balancer>", "openstack loadbalancer stats show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] 
[--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <load_balancer>", "openstack loadbalancer status show [-h] <load_balancer>", "openstack loadbalancer unset [-h] [--name] [--description] [--vip-qos-policy-id] [--wait] [--tag <tag> | --all-tag] <load_balancer>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/loadbalancer
Chapter 13. Logging, events, and monitoring
Chapter 13. Logging, events, and monitoring 13.1. Viewing virtual machine logs 13.1.1. About virtual machine logs Logs are collected for OpenShift Container Platform builds, deployments, and pods. In OpenShift Virtualization, virtual machine logs can be retrieved from the virtual machine launcher pod in either the web console or the CLI. The -f option follows the log output in real time, which is useful for monitoring progress and error checking. If the launcher pod is failing to start, use the --previous option to see the logs of the last attempt. Warning ErrImagePull and ImagePullBackOff errors can be caused by an incorrect deployment configuration or problems with the images that are referenced. 13.1.2. Viewing virtual machine logs in the CLI Get virtual machine logs from the virtual machine launcher pod. Procedure Use the following command: USD oc logs <virt-launcher-name> 13.1.3. Viewing virtual machine logs in the web console Get virtual machine logs from the associated virtual machine launcher pod. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. In the Details tab, click the virt-launcher-<name> pod in the Pod section. Click Logs . 13.2. Viewing events 13.2.1. About virtual machine events OpenShift Container Platform events are records of important life-cycle information in a namespace and are useful for monitoring and troubleshooting resource scheduling, creation, and deletion issues. OpenShift Virtualization adds events for virtual machines and virtual machine instances. These can be viewed from either the web console or the CLI. See also: Viewing system event information in an OpenShift Container Platform cluster . 13.2.2. Viewing the events for a virtual machine in the web console You can view the stream of events for a running virtual machine from the Virtual Machine Overview panel of the web console. The ▮▮ button pauses the events stream. The ▶ button continues a paused events stream. Procedure Click Workloads Virtualization from the side menu. Click the Virtual Machines tab. Select a virtual machine to open the Virtual Machine Overview screen. Click Events to view all events for the virtual machine. 13.2.3. Viewing namespace events in the CLI Use the OpenShift Container Platform client to get the events for a namespace. Procedure In the namespace, use the oc get command: USD oc get events 13.2.4. Viewing resource events in the CLI Events are included in the resource description, which you can get using the OpenShift Container Platform client. Procedure In the namespace, use the oc describe command. The following example shows how to get the events for a virtual machine, a virtual machine instance, and the virt-launcher pod for a virtual machine: USD oc describe vm <vm> USD oc describe vmi <vmi> USD oc describe pod virt-launcher-<name> 13.3. Diagnosing data volumes using events and conditions Use the oc describe command to analyze and help resolve issues with data volumes. 13.3.1. About conditions and events Diagnose data volume issues by examining the output of the Conditions and Events sections generated by the command: USD oc describe dv <DataVolume> There are three Types in the Conditions section that display: Bound Running Ready The Events section provides the following additional information: Type of event Reason for logging Source of the event Message containing additional diagnostic information.
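A quick CLI triage pass can combine the commands from the preceding sections. The following is a minimal sketch, not taken from this document: the namespace my-namespace, the virtual machine my-vm, the launcher pod suffix, and the data volume my-dv are hypothetical placeholders.
# Follow the launcher pod logs in real time; --previous shows the logs of the last failed attempt.
oc logs -f -n my-namespace virt-launcher-my-vm-abcde
oc logs --previous -n my-namespace virt-launcher-my-vm-abcde
# List the events recorded for the virtual machine instance.
oc get events -n my-namespace --field-selector involvedObject.kind=VirtualMachineInstance,involvedObject.name=my-vm
# Print only the condition types, statuses, and messages of a data volume.
oc get dv my-dv -n my-namespace -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'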
The output from oc describe does not always contain Events . An event is generated when either Status , Reason , or Message changes. Both conditions and events react to changes in the state of the data volume. For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well. 13.3.2. Analyzing data volumes using conditions and events By inspecting the Conditions and Events sections generated by the describe command, you can determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or not an operation is actively running or completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state. There are many different combinations of conditions. Each must be evaluated in its unique context. Examples of various combinations follow. Bound - A successfully bound PVC displays in this example. Note that the Type is Bound , so the Status is True . If the PVC is not bound, the Status is False . When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True . The Message indicates which PVC owns the data volume. Message , in the Events section, provides further details including how long the PVC has been bound ( Age ) and by what resource ( From ), in this case datavolume-controller : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound Running - In this case, note that Type is Running and Status is False , indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False . However, note that Reason is Completed and the Message field indicates Import Complete . In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404 , listed in the Events section's first Warning . From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume: Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found Ready - If Type is Ready and Status is True , then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready 13.4. Viewing information about virtual machine workloads You can view high-level information about your virtual machines by using the Virtual Machines dashboard in the OpenShift Container Platform web console. 13.4.1.
About the Virtual Machines dashboard Access virtual machines from the OpenShift Container Platform web console by navigating to the Workloads Virtualization page. The Workloads Virtualization page contains two tabs: Virtual Machines Virtual Machine Templates The following cards describe each virtual machine: Details provides identifying information about the virtual machine, including: Name Namespace Date of creation Node name IP address Inventory lists the virtual machine's resources, including: Network interface controllers (NICs) Disks Status includes: The current status of the virtual machine A note indicating whether or not the QEMU guest agent is installed on the virtual machine Utilization includes charts that display usage data for: CPU Memory Filesystem Network transfer Note Use the drop-down list to choose a duration for the utilization data. The available options are 1 Hour , 6 Hours , and 24 Hours . Events lists messages about virtual machine activity over the past hour. To view additional events, click View all . 13.5. Monitoring virtual machine health A virtual machine instance (VMI) can become unhealthy due to transient issues such as connectivity loss, deadlocks, or problems with external dependencies. A health check periodically performs diagnostics on a VMI by using any combination of the readiness and liveness probes. 13.5.1. About readiness and liveness probes Use readiness and liveness probes to detect and handle unhealthy virtual machine instances (VMIs). You can include one or more probes in the specification of the VMI to ensure that traffic does not reach a VMI that is not ready for it and that a new instance is created when a VMI becomes unresponsive. A readiness probe determines whether a VMI is ready to accept service requests. If the probe fails, the VMI is removed from the list of available endpoints until the VMI is ready. A liveness probe determines whether a VMI is responsive. If the probe fails, the VMI is deleted and a new instance is created to restore responsiveness. You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachineInstance object. These fields support the following tests: HTTP GET The probe determines the health of the VMI by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized. TCP socket The probe attempts to open a socket to the VMI. The VMI is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. 13.5.2. Defining an HTTP readiness probe Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine instance (VMI) configuration. Procedure Include details of the readiness probe in the VMI configuration file. Sample readiness probe with an HTTP GET test # ... spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8 # ... 1 The HTTP GET request to perform to connect to the VMI. 2 The port of the VMI that the probe queries. In the above example, the probe queries port 1500. 3 The path to access on the HTTP server. 
In the above example, if the handler for the server's /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is removed from the list of available endpoints. 4 The time, in seconds, after the VMI starts before the readiness probe is initiated. 5 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 6 The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . 7 The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready . 8 The number of times that the probe must report success, after a failure, to be considered successful. The default is 1. Create the VMI by running the following command: USD oc create -f <file_name>.yaml 13.5.3. Defining a TCP readiness probe Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine instance (VMI) configuration. Procedure Include details of the TCP readiness probe in the VMI configuration file. Sample readiness probe with a TCP socket test ... spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5 ... 1 The time, in seconds, after the VMI starts before the readiness probe is initiated. 2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The TCP action to perform. 4 The port of the VMI that the probe queries. 5 The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . Create the VMI by running the following command: USD oc create -f <file_name>.yaml 13.5.4. Defining an HTTP liveness probe Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine instance (VMI) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test. Procedure Include details of the HTTP liveness probe in the VMI configuration file. Sample liveness probe with an HTTP GET test # ... spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6 # ... 1 The time, in seconds, after the VMI starts before the liveness probe is initiated. 2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The HTTP GET request to perform to connect to the VMI. 4 The port of the VMI that the probe queries. In the above example, the probe queries port 1500. The VMI installs and runs a minimal HTTP server on port 1500 via cloud-init. 5 The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is deleted and a new instance is created. 6 The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . 
Create the VMI by running the following command: USD oc create -f <file_name>.yaml 13.5.5. Template: Virtual machine configuration file for defining health checks apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-fedora name: vm-fedora spec: template: metadata: labels: special: vm-fedora spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M readinessProbe: httpGet: port: 1500 initialDelaySeconds: 120 periodSeconds: 20 timeoutSeconds: 10 failureThreshold: 3 successThreshold: 3 terminationGracePeriodSeconds: 180 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-registry-disk-demo - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - setenforce 0 - dnf install -y nmap-ncat - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!' name: cloudinitdisk 13.5.6. Additional resources Monitoring application health by using health checks 13.6. Using the OpenShift Container Platform dashboard to get cluster information Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by clicking Home > Dashboards > Overview from the OpenShift Container Platform web console. The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards . 13.6.1. About the OpenShift Container Platform dashboards page The OpenShift Container Platform dashboard consists of the following cards: Details provides a brief overview of informational cluster details. Statuses include ok , error , warning , in progress , and unknown . Resources can add custom status names. Cluster ID Provider Version Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems, including information about: Number of nodes Number of pods Persistent storage volume claims Virtual machines (available if OpenShift Virtualization is installed) Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment). Cluster Health summarizes the current health of the cluster as a whole, including relevant alerts and descriptions. If OpenShift Virtualization is installed, the overall health of OpenShift Virtualization is diagnosed as well. If more than one subsystem is present, click See All to view the status of each subsystem. Cluster Capacity charts help administrators understand when additional resources are required in the cluster. The charts contain an inner ring that displays current consumption, while an outer ring displays thresholds configured for the resource, including information about: CPU time Memory allocation Storage consumed Network resources consumed Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption. Events lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. Top Consumers helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). 13.7.
Reviewing resource usage by virtual machines Dashboards in the OpenShift Container Platform web console provide visual representations of cluster metrics to help you to quickly understand the state of your cluster. Dashboards belong to the monitoring stack that provides monitoring for core platform components. The OpenShift Virtualization dashboard provides data on resource consumption for virtual machines and associated pods. The visualization metrics displayed in the OpenShift Virtualization dashboard are based on Prometheus Query Language (PromQL) queries . A monitoring role is required to monitor user-defined namespaces in the OpenShift Virtualization dashboard. 13.7.1. About reviewing top consumers In the OpenShift Virtualization dashboard, you can select a specific time period and view the top consumers of resources within that time period. Top consumers are virtual machines or virt-launcher pods that are consuming the highest amount of resources. The following table shows resources monitored in the dashboard and describes the metrics associated with each resource for top consumers. Monitored resources Description Memory swap traffic Virtual machines consuming the most memory pressure when swapping memory. vCPU wait Virtual machines experiencing the maximum wait time (in seconds) for their vCPUs. CPU usage by pod The virt-launcher pods that are using the most CPU. Network traffic Virtual machines that are saturating the network by receiving the most amount of network traffic (in bytes). Storage traffic Virtual machines with the highest amount (in bytes) of storage-related traffic. Storage IOPS Virtual machines with the highest amount of I/O operations per second over a time period. Memory usage The virt-launcher pods that are using the most memory (in bytes). Note Viewing the maximum resource consumption is limited to the top five consumers. 13.7.2. Reviewing top consumers In the Administrator perspective, you can view the OpenShift Virtualization dashboard where top consumers of resources are displayed. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Virtualization web console, navigate to Observe Dashboards . Select the KubeVirt/Infrastructure Resources/Top Consumers dashboard from the Dashboard list. Select a predefined time period from the drop-down menu for Period. You can review the data for top consumers in the tables. Optional: Click Inspect to view or edit the Prometheus Query Language (PromQL) query associated with the top consumers for a table. 13.7.3. Additional resources Monitoring overview Reviewing monitoring dashboards 13.8. OpenShift Container Platform cluster monitoring, logging, and Telemetry OpenShift Container Platform provides various resources for monitoring at the cluster level. 13.8.1. About OpenShift Container Platform monitoring OpenShift Container Platform includes a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components . OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts are included by default that immediately notify cluster administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster. 
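The dashboard visualizations described above are driven by PromQL, so the same data can also be pulled from the command line. The following is a minimal sketch, not taken from this document: it assumes the default openshift-monitoring namespace, the thanos-querier route exposed by the cluster monitoring stack, and a logged-in user whose token is permitted to query cluster metrics.
# Resolve the metrics endpoint and an access token for the current user.
TOKEN=$(oc whoami -t)
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
# Run a top-consumer style query (vCPU wait time) through the Prometheus HTTP API.
# -k skips TLS verification; use --cacert instead if the router CA is available locally.
curl -sk -H "Authorization: Bearer ${TOKEN}" "https://${HOST}/api/v1/query" \
  --data-urlencode 'query=topk(5, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m])))'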
After installing OpenShift Container Platform 4.9, cluster administrators can optionally enable monitoring for user-defined projects . By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. You can then query metrics, review dashboards, and manage alerting rules and silences for your own projects in the OpenShift Container Platform web console. Note Cluster administrators can grant developers and other users permission to monitor their own projects. Privileges are granted by assigning one of the predefined monitoring roles. 13.8.2. About logging subsystem components The logging subsystem components include a collector deployed to each node in the OpenShift Container Platform cluster that collects all node and container logs and writes them to a log store. You can use a centralized web UI to create rich visualizations and dashboards with the aggregated data. The major components of the logging subsystem are: collection - This is the component that collects logs from the cluster, formats them, and forwards them to the log store. The current implementation is Fluentd. log store - This is where the logs are stored. The default implementation is Elasticsearch. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage. visualization - This is the UI component you can use to view logs, graphs, charts, and so forth. The current implementation is Kibana. For more information on OpenShift Logging, see the OpenShift Logging documentation. 13.8.3. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use. 13.8.3.1. Information collected by Telemetry The following information is collected by Telemetry: 13.8.3.1.1. 
System information Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The unique random identifier that is generated during an installation Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services The OpenShift Container Platform framework components installed in a cluster and their condition and status Events for all namespaces listed as "related objects" for a degraded Operator Information about degraded software Information about the validity of certificates The name of the provider platform that OpenShift Container Platform is deployed on and the data center location 13.8.3.1.2. Sizing Information Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of running virtual machine instances in a cluster The number of etcd members and the number of objects stored in the etcd cluster Number of application builds by build strategy type 13.8.3.1.3. Usage information Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. 13.8.4. CLI troubleshooting and debugging commands For a list of the oc client troubleshooting and debugging commands, see the OpenShift Container Platform CLI tools documentation. 13.9. Prometheus queries for virtual resources OpenShift Virtualization provides metrics for monitoring how infrastructure resources are consumed in the cluster. The metrics cover the following resources: vCPU Network Storage Guest memory swapping Use the OpenShift Container Platform monitoring dashboard to query virtualization metrics. 13.9.1. Prerequisites To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. See the OpenShift Container Platform machine configuration tasks documentation for more information on applying a kernel argument. For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests. 13.9.2. Querying metrics The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a cluster administrator , you can query metrics for all core OpenShift Container Platform and user-defined projects. As a developer , you must specify a project name when querying metrics. 
You must have the required privileges to view metrics for the selected project. 13.9.2.1. Querying metrics for all projects as a cluster administrator As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI. Note Only cluster administrators have access to the third-party UIs provided with OpenShift Container Platform Monitoring. Prerequisites You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective within the OpenShift Container Platform web console, select Observe Metrics . Select Insert Metric at Cursor to view a list of predefined queries. To create a custom query, add your Prometheus Query Language (PromQL) query to the Expression field. To add multiple queries, select Add Query . To delete a query, select ⋮ next to the query, then choose Delete query . To disable a query from being run, select ⋮ next to the query and choose Disable query . Select Run Queries to run the queries that you have created. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL. Additional resources See the Prometheus query documentation for more information about creating PromQL queries. 13.9.2.2. Querying metrics for user-defined projects as a developer You can access metrics for a user-defined project as a developer or as a user with view permissions for the project. In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Developers cannot access the third-party UIs provided with OpenShift Container Platform monitoring that are for core platform components. Instead, use the Metrics UI for your user-defined project. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure From the Developer perspective in the OpenShift Container Platform web console, select Observe Metrics . Select the project that you want to view metrics for in the Project: list. Choose a query from the Select Query list, or run a custom PromQL query by selecting Show PromQL . Note In the Developer perspective, you can only run one query at a time. Additional resources See the Prometheus query documentation for more information about creating PromQL queries. 13.9.3.
Virtualization metrics The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. Note The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output. 13.9.3.1. vCPU metrics The following query can identify virtual machines that are waiting for Input/Output (I/O): kubevirt_vmi_vcpu_wait_seconds Returns the wait time (in seconds) for a virtual machine's vCPU. A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O. Note To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. Example vCPU wait time query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1 1 This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period. 13.9.3.2. Network metrics The following queries can identify virtual machines that are saturating the network: kubevirt_vmi_network_receive_bytes_total Returns the total amount of traffic received (in bytes) on the virtual machine's network. kubevirt_vmi_network_transmit_bytes_total Returns the total amount of traffic transmitted (in bytes) on the virtual machine's network. Example network traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs sending and receiving the most network traffic at every given moment over a six-minute time period. 13.9.3.3. Storage metrics 13.9.3.3.1. Storage-related traffic The following queries can identify VMs that are writing large amounts of data: kubevirt_vmi_storage_read_traffic_bytes_total Returns the total amount of storage reads (in bytes) of the virtual machine's storage-related traffic. kubevirt_vmi_storage_write_traffic_bytes_total Returns the total amount of storage writes (in bytes) of the virtual machine's storage-related traffic. Example storage-related traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period. 13.9.3.3.2. I/O performance The following queries can determine the I/O performance of storage devices: kubevirt_vmi_storage_iops_read_total Returns the number of read I/O operations the virtual machine is performing per second. kubevirt_vmi_storage_iops_write_total Returns the number of write I/O operations the virtual machine is performing per second. Example I/O performance query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period. 13.9.3.4.
Guest memory swapping metrics The following queries can identify which swap-enabled guests are performing the most memory swapping: kubevirt_vmi_memory_swap_in_traffic_bytes_total Returns the total amount (in bytes) of memory the virtual guest is swapping in. kubevirt_vmi_memory_swap_out_traffic_bytes_total Returns the total amount (in bytes) of memory the virtual guest is swapping out. Example memory swapping query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period. Note Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue. 13.9.4. Additional resources Monitoring overview 13.10. Collecting data for Red Hat Support When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools: must-gather tool The must-gather tool collects diagnostic information, including resource definitions and service logs. Prometheus Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Alertmanager The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems. 13.10.1. Collecting data about your environment Collecting data about your environment minimizes the time required to analyze and determine the root cause. Prerequisites Set the retention time for Prometheus metrics data to a minimum of seven days. Configure the Alertmanager to capture relevant alerts and to send them to a dedicated mailbox so that they can be viewed and persisted outside the cluster. Record the exact number of affected nodes and virtual machines. Procedure Collect must-gather data for the cluster by using the default must-gather image. Collect must-gather data for Red Hat OpenShift Container Storage, if necessary. Collect must-gather data for OpenShift Virtualization by using the OpenShift Virtualization must-gather image. Collect Prometheus metrics for the cluster. 13.10.1.1. Additional resources Configuring the retention time for Prometheus metrics data Configuring the Alertmanager to send alert notifications to external systems Collecting must-gather data for OpenShift Container Platform Collecting must-gather data for Red Hat OpenShift Container Storage Collecting must-gather data for OpenShift Virtualization Collecting Prometheus metrics for all projects as a cluster administrator 13.10.2. Collecting data about virtual machines Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause. Prerequisites Windows VMs: Record the Windows patch update details for Red Hat Support. Install the latest version of the VirtIO drivers. The VirtIO drivers include the QEMU guest agent. If Remote Desktop Protocol (RDP) is enabled, try to connect to the VMs with RDP to determine whether there is a problem with the connection software. Procedure Collect detailed must-gather data about the malfunctioning VMs. Collect screenshots of VMs that have crashed before you restart them. 
Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network. 13.10.2.1. Additional resources Installing VirtIO drivers on Windows VMs Downloading and installing VirtIO drivers on Windows VMs without host access Connecting to Windows VMs with RDP using the web console or the command line Collecting must-gather data about virtual machines 13.10.3. Using the must-gather tool for OpenShift Virtualization You can collect data about OpenShift Virtualization resources by running the must-gather command with the OpenShift Virtualization image. The default data collection includes information about the following resources: OpenShift Virtualization Operator namespaces, including child objects OpenShift Virtualization custom resource definitions Namespaces that contain virtual machines Basic virtual machine definitions Procedure Run the following command to collect data about OpenShift Virtualization: USD oc adm must-gather --image-stream=openshift/must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v{HCOVersion} 13.10.3.1. must-gather tool options You can specify a combination of scripts and environment variables for the following options: Collecting detailed virtual machine (VM) information from a namespace Collecting detailed information about specified VMs Collecting image and image stream information Limiting the maximum number of parallel processes used by the must-gather tool 13.10.3.1.1. Parameters Environment variables You can specify environment variables for a compatible script. NS=<namespace_name> Collect virtual machine information, including virt-launcher pod details, from the namespace that you specify. The VirtualMachine and VirtualMachineInstance CR data is collected for all namespaces. VM=<vm_name> Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the NS environment variable. PROS=<number_of_processes> Modify the maximum number of parallel processes that the must-gather tool uses. The default value is 5 . Important Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended. Scripts Each script is only compatible with certain environment variable combinations. gather_vms_details Collect VM log files, VM definitions, and namespaces (and their child objects) that belong to OpenShift Virtualization resources. If you use this parameter without specifying a namespace or VM, the must-gather tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the VM variable. gather Use the default must-gather script, which collects cluster data from all namespaces and includes only basic VM information. This script is only compatible with the PROS variable. gather_images Collect image and image stream custom resource information. This script is only compatible with the PROS variable. 13.10.3.1.2. Usage and examples Environment variables are optional. You can run a script by itself or with one or more compatible environment variables. Table 13.1. 
Compatible parameters Script Compatible environment variable gather_vms_details For a namespace: NS=<namespace_name> For a VM: VM=<vm_name> NS=<namespace_name> PROS=<number_of_processes> gather PROS=<number_of_processes> gather_images PROS=<number_of_processes> To customize the data that must-gather collects, you append a double dash ( -- ) to the command, followed by a space and one or more compatible parameters. Syntax USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 \ -- <environment_variable_1> <environment_variable_2> <script_name> Detailed VM information The following command collects detailed VM information for the my-vm VM in the mynamespace namespace: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 \ -- NS=mynamespace VM=my-vm gather_vms_details 1 1 The NS environment variable is mandatory if you use the VM environment variable. Default data collection limited to three parallel processes The following command collects default must-gather information by using a maximum of three parallel processes: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 \ -- PROS=3 gather Image and image stream information The following command collects image and image stream information from the cluster: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 \ -- gather_images 13.10.3.2. Additional resources About the must-gather tool
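When the collection finishes, the data is written to a local directory that is usually compressed and attached to the support case. A minimal sketch follows; the destination directory name and archive name are illustrative choices, not values from this document.
# Write the OpenShift Virtualization must-gather output to a known directory, then compress it for upload.
oc adm must-gather --dest-dir=/tmp/cnv-must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7
tar czf cnv-must-gather.tar.gz -C /tmp cnv-must-gather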
[ "oc logs <virt-launcher-name>", "oc get events", "oc describe vm <vm>", "oc describe vmi <vmi>", "oc describe pod virt-launcher-<name>", "oc describe dv <DataVolume>", "Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready", "spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8", "oc create -f <file_name>.yaml", "spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5", "oc create -f <file_name>.yaml", "spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-fedora name: vm-fedora spec: template: metadata: labels: special: vm-fedora spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M readinessProbe: httpGet: port: 1500 initialDelaySeconds: 120 periodSeconds: 20 timeoutSeconds: 10 failureThreshold: 3 successThreshold: 3 terminationGracePeriodSeconds: 180 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-registry-disk-demo - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - setenforce 0 - dnf install -y nmap-ncat - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\\\n\\\\nHello World!' 
name: cloudinitdisk", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1", "oc adm must-gather --image-stream=openshift/must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v{HCOVersion}", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 -- <environment_variable_1> <environment_variable_2> <script_name>", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 -- NS=mynamespace VM=my-vm gather_vms_details 1", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 -- PROS=3 gather", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 -- gather_images" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/virtualization/logging-events-and-monitoring
Chapter 7. Types This section enumerates all the data types that are available in the API. 7.1. AccessProtocol enum Represents the access protocols supported by Gluster volumes. gluster and nfs are enabled by default. Table 7.1. Values summary Name Summary cifs CIFS access protocol. gluster Gluster access protocol. nfs NFS access protocol. 7.2. Action struct Table 7.2. Attributes summary Name Type Summary activate Boolean allow_partial_import Boolean async Boolean attachment DiskAttachment authorized_key AuthorizedKey auto_pinning_policy AutoPinningPolicy bricks GlusterBrick[ ] certificates Certificate[ ] check_connectivity Boolean clone Boolean clone_permissions Boolean cluster Cluster collapse_snapshots Boolean comment String Free text containing comments about this object. commit_on_success Boolean connection StorageConnection connectivity_timeout Integer correlation_id String data_center DataCenter deploy_hosted_engine Boolean description String A human-readable description in plain text. details GlusterVolumeProfileDetails directory String discard_snapshots Boolean discovered_targets IscsiDetails[ ] disk Disk disk_profile DiskProfile disks Disk[ ] exclusive Boolean fault Fault fence_type String filename String filter Boolean fix_layout Boolean follow String force Boolean grace_period GracePeriod host Host id String A unique identifier. image String image_transfer ImageTransfer import_as_template Boolean is_attached Boolean iscsi IscsiDetails iscsi_targets String[ ] job Job lease StorageDomainLease logical_units LogicalUnit[ ] maintenance_after_restart Boolean maintenance_enabled Boolean migrate_vms_in_affinity_closure Boolean modified_bonds HostNic[ ] modified_labels NetworkLabel[ ] modified_network_attachments NetworkAttachment[ ] name String A human-readable name in plain text. optimize_cpu_settings Boolean option Option pause Boolean permission Permission power_management PowerManagement proxy_ticket ProxyTicket quota Quota reason String reassign_bad_macs Boolean reboot Boolean registration_configuration RegistrationConfiguration remote_viewer_connection_file String removed_bonds HostNic[ ] removed_labels NetworkLabel[ ] removed_network_attachments NetworkAttachment[ ] resolution_type String restore_memory Boolean root_password String seal Boolean snapshot Snapshot source_host Host ssh Ssh status String stop_gluster_service Boolean storage_domain StorageDomain storage_domains StorageDomain[ ] succeeded Boolean synchronized_network_attachments NetworkAttachment[ ] template Template ticket Ticket timeout Integer undeploy_hosted_engine Boolean upgrade_action ClusterUpgradeAction upgrade_percent_complete Integer use_cloud_init Boolean use_ignition Boolean use_initialization Boolean use_sysprep Boolean virtual_functions_configuration HostNicVirtualFunctionsConfiguration vm Vm vnic_profile_mappings VnicProfileMapping[ ] volatile Boolean 7.3. AffinityGroup struct An affinity group represents a group of virtual machines with a defined relationship. Table 7.3. Attributes summary Name Type Summary broken Boolean Specifies if the affinity group is broken. comment String Free text containing comments about this object. description String A human-readable description in plain text. enforcing Boolean Specifies whether the affinity group uses hard or soft enforcement of the affinity applied to virtual machines that are members of that affinity group. hosts_rule AffinityRule Specifies the affinity rule applied between virtual machines and hosts that are members of this affinity group. 
id String A unique identifier. name String A human-readable name in plain text. positive Boolean Specifies whether the affinity group applies positive affinity or negative affinity to virtual machines that are members of that affinity group. priority Decimal Priority of the affinity group. vms_rule AffinityRule Specifies the affinity rule applied to virtual machines that are members of this affinity group. 7.3.1. broken Specifies if the affinity group is broken. Affinity group is considered broken when any of its rules are not satisfied. Broken field is a computed field in the engine. Because of that, this field is only usable in GET requests. 7.3.2. enforcing Specifies whether the affinity group uses hard or soft enforcement of the affinity applied to virtual machines that are members of that affinity group. Warning Please note that this attribute has been deprecated since version 4.1 of the engine, and will be removed in the future. Use the vms_rule attribute from now on. 7.3.3. positive Specifies whether the affinity group applies positive affinity or negative affinity to virtual machines that are members of that affinity group. Warning Please note that this attribute has been deprecated since version 4.1 of the engine, and will be removed in the future. Use the vms_rule attribute from now on. Table 7.4. Links summary Name Type Summary cluster Cluster A reference to the cluster to which the affinity group applies. host_labels AffinityLabel[ ] A list of all host labels assigned to this affinity group. hosts Host[ ] A list of all hosts assigned to this affinity group. vm_labels AffinityLabel[ ] A list of all virtual machine labels assigned to this affinity group. vms Vm[ ] A list of all virtual machines assigned to this affinity group. 7.4. AffinityLabel struct The affinity label can influence virtual machine scheduling. It is most frequently used to create a sub-cluster from the available hosts. Table 7.5. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. has_implicit_affinity_group Boolean This property enables the legacy behavior for labels. id String A unique identifier. name String A human-readable name in plain text. read_only Boolean The read_only property marks a label that can not be modified. 7.4.1. has_implicit_affinity_group This property enables the legacy behavior for labels. If true , the label acts also as a positive enforcing VM-to-host affinity group. This parameter is only used for clusters with compatibility version 4.3 or lower. 7.4.2. read_only The read_only property marks a label that can not be modified. This is usually the case when listing internally-generated labels. Table 7.6. Links summary Name Type Summary hosts Host[ ] A list of hosts that were labeled using this scheduling label. vms Vm[ ] A list of virtual machines that were labeled using this scheduling label. 7.5. AffinityRule struct Generic rule definition for affinity group. Each supported resource type (virtual machine, host) is controlled by a separate rule. This allows expressing of rules like: no affinity between defined virtual machines, but hard affinity between defined virtual machines and virtual hosts. Table 7.7. Attributes summary Name Type Summary enabled Boolean Specifies whether the affinity group uses this rule or not. enforcing Boolean Specifies whether the affinity group uses hard or soft enforcement of the affinity applied to the resources that are controlled by this rule. 
positive Boolean Specifies whether the affinity group applies positive affinity or negative affinity to the resources that are controlled by this rule. 7.5.1. enabled Specifies whether the affinity group uses this rule or not. This attribute is optional during creation and is considered to be true when it is not provided. In case this attribute is not provided to the update operation, it is considered to be true if AffinityGroup positive attribute is set as well. The backend enabled value will be preserved when both enabled and positive attributes are missing. 7.5.2. enforcing Specifies whether the affinity group uses hard or soft enforcement of the affinity applied to the resources that are controlled by this rule. This argument is mandatory if the rule is enabled and is ignored when the rule is disabled. 7.5.3. positive Specifies whether the affinity group applies positive affinity or negative affinity to the resources that are controlled by this rule. This argument is mandatory if the rule is enabled and is ignored when the rule is disabled. 7.6. Agent struct Type representing a fence agent. Table 7.8. Attributes summary Name Type Summary address String Fence agent address. comment String Free text containing comments about this object. concurrent Boolean Specifies whether the agent should be used concurrently or sequentially. description String A human-readable description in plain text. encrypt_options Boolean Specifies whether the options should be encrypted. id String A unique identifier. name String A human-readable name in plain text. options Option[ ] Fence agent options (comma-delimited list of key-value pairs). order Integer The order of this agent if used with other agents. password String Fence agent password. port Integer Fence agent port. type String Fence agent type. username String Fence agent user name. Table 7.9. Links summary Name Type Summary host Host Reference to the host service. 7.6.1. host Reference to the host service. Each fence agent belongs to a single host. 7.7. AgentConfiguration struct Deprecated Agent configuration settings. Ignored, because the deployment of OpenStack Neutron agent is dropped since Red Hat Virtualization 4.4.0. The deployment of OpenStack hosts can be done by Red Hat OpenStack Platform Director or TripleO. Table 7.10. Attributes summary Name Type Summary address String broker_type MessageBrokerType network_mappings String Not recommended to use, because the Open vSwitch interface mappings are managed by VDSM since Red Hat Virtualization 4. password String port Integer username String 7.7.1. network_mappings Not recommended to use, because the Open vSwitch interface mappings are managed by VDSM since Red Hat Virtualization 4.2.0. 7.8. Api struct This type contains the information returned by the root service of the API. To get that information send a request like this: The result will be like this: <api> <link rel="hosts" href="/ovirt-engine/api/hosts"/> <link rel="vms" href="/ovirt-engine/api/vms"/> ... 
<product_info> <name>oVirt Engine</name> <vendor>ovirt.org</vendor> <version> <build>0</build> <full_version>4.1.0_master</full_version> <major>4</major> <minor>1</minor> <revision>0</revision> </version> </product_info> <special_objects> <link rel="templates/blank" href="..."/> <link rel="tags/root" href="..."/> </special_objects> <summary> <vms> <total>10</total> <active>3</active> </vms> <hosts> <total>2</total> <active>2</active> </hosts> <users> <total>8</total> <active>2</active> </users> <storage_domains> <total>2</total> <active>2</active> </storage_domains> </summary> <time>2016-12-12T12:22:25.866+01:00</time> </api> Table 7.11. Attributes summary Name Type Summary product_info ProductInfo Information about the product, such as its name, the name of the vendor, and the version. special_objects SpecialObjects References to special objects, such as the blank template and the root of the hierarchy of tags. summary ApiSummary A summary containing the total number of relevant objects, such as virtual machines, hosts, and storage domains. time Date The date and time when this information was generated. Table 7.12. Links summary Name Type Summary authenticated_user User Reference to the authenticated user. effective_user User Reference to the effective user. 7.8.1. authenticated_user Reference to the authenticated user. The authenticated user is the user whose credentials were verified in order to accept the current request. In the current version of the system the authenticated user and the effective user are always the same. In the future, when support for user impersonation is introduced, they will be potentially different. 7.8.2. effective_user Reference to the effective user. The effective user is the user whose permissions apply during the current request. In the current version of the system the authenticated user and the effective user are always the same. In the future, when support for user impersonation is introduced, they will be potentially different. 7.9. ApiSummary struct A summary containing the total number of relevant objects, such as virtual machines, hosts, and storage domains. Table 7.13. Attributes summary Name Type Summary hosts ApiSummaryItem The summary of hosts. storage_domains ApiSummaryItem The summary of storage domains. users ApiSummaryItem The summary of users. vms ApiSummaryItem The summary of virtual machines. 7.10. ApiSummaryItem struct This type contains an item of the API summary. Each item contains the total and active number of some kind of object. Table 7.14. Attributes summary Name Type Summary active Integer The total number of active objects. total Integer The total number of objects. 7.11. Application struct Represents an application installed on a virtual machine. Applications are reported by the guest agent, if you deploy one on the virtual machine operating system. To get that information send a request like this: The result will be like this: <application href="/ovirt-engine/api/vms/123/applications/456" id="456"> <name>application-test-1.0.0-0.el7</name> <vm href="/ovirt-engine/api/vms/123" id="123"/> </application> Table 7.15. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.16. Links summary Name Type Summary vm Vm A reference to the virtual machine the application is installed on. 7.12. Architecture enum Table 7.17. 
Values summary Name Summary aarch64 AARCH64 CPU architecture. ppc64 s390x IBM S390X CPU architecture. undefined x86_64 7.12.1. s390x IBM S390X CPU architecture. Needs to be specified for virtual machines and clusters running on the S390X architecture. Note that S390 is often used in an ambiguous way to describe either the general machine architecture as such or its 31-bit variant. S390X is used specifically for the 64-bit architecture, which is in line with the other architectures, like X86_64 or PPC64. 7.13. AuthorizedKey struct Table 7.18. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. key String name String A human-readable name in plain text. Table 7.19. Links summary Name Type Summary user User 7.14. AutoNumaStatus enum Table 7.20. Values summary Name Summary disable enable unknown 7.15. AutoPinningPolicy enum Type representing what the CPU and NUMA pinning policy is. Important Since version 4.5 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. Instead please use CpuPinningPolicy. Table 7.21. Values summary Name Summary adjust The CPU and NUMA pinning will be configured by the dedicated host. disabled The CPU and NUMA pinning won't be calculated. existing The CPU and NUMA pinning will be configured by the virtual machine current state. 7.15.1. adjust The CPU and NUMA pinning will be configured by the dedicated host. Currently, its implication is that the CPU and NUMA pinning will use the dedicated host CPU topology. The virtual machine configuration will automatically be set to fit the host to get the highest possible performance. 7.15.2. disabled The CPU and NUMA pinning won't be calculated. Currently, its implication is that the CPU and NUMA pinning won't be calculated to the current virtual machine configuration. By default the VM topology set with 1 Socket, 1 Core and 1 Thread. 7.15.3. existing The CPU and NUMA pinning will be configured by the virtual machine current state. Currently, its implication is that the CPU and NUMA pinning will use the provided virtual machine CPU topology. Without given CPU topology it will use the engine defaults (the VM topology set with 1 Socket, 1 Core and 1 Thread). 7.16. Backup struct Table 7.22. Attributes summary Name Type Summary comment String Free text containing comments about this object. creation_date Date The backup creation date. description String A human-readable description in plain text. from_checkpoint_id String The checkpoint id at which to start the incremental backup. id String A unique identifier. modification_date Date The backup modification date. name String A human-readable name in plain text. phase BackupPhase The phase of the backup operation. to_checkpoint_id String The checkpoint id created by this backup operation. 7.16.1. to_checkpoint_id The checkpoint id created by this backup operation. This id can be used as the fromCheckpointId in the incremental backup. Table 7.23. Links summary Name Type Summary disks Disk[ ] A list of disks contained in the virtual machine backup. host Host The host that was used to start the backup. snapshot Snapshot A reference to the snapshot created if the backup is using a snapshot. vm Vm A reference to the virtual machine associated with the backup. 7.17. BackupPhase enum Table 7.24. 
Values summary Name Summary failed The final phase, indicates that the backup has failed. finalizing In this phase, the backup is invoking 'stop_backup' operation in order to complete the backup and unlock the relevant disk. initializing The initial phase of the backup. ready The phase means that the relevant disks' backup URLs are ready to be used and downloaded using image transfer. starting The phase is set before invoking 'start_backup' operation in vdsm/libvirt (which means that 'stop_backup' should be invoked to complete the flow). succeeded The final phase, indicates that the backup has finished successfully. 7.17.1. initializing The initial phase of the backup. It is set on entity creation. 7.18. Balance struct Table 7.25. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.26. Links summary Name Type Summary scheduling_policy SchedulingPolicy scheduling_policy_unit SchedulingPolicyUnit 7.19. Bios struct Table 7.27. Attributes summary Name Type Summary boot_menu BootMenu type BiosType Chipset and BIOS type combination. 7.20. BiosType enum Type representing a chipset and a BIOS type combination. Table 7.28. Values summary Name Summary cluster_default Use the cluster-wide default. i440fx_sea_bios i440fx chipset with SeaBIOS. q35_ovmf q35 chipset with OVMF (UEFI) BIOS. q35_sea_bios q35 chipset with SeaBIOS. q35_secure_boot q35 chipset with OVMF (UEFI) BIOS with SecureBoot enabled. 7.20.1. cluster_default Use the cluster-wide default. This value cannot be used for cluster. 7.20.2. i440fx_sea_bios i440fx chipset with SeaBIOS. For non-x86 architectures this is the only non-default value allowed. 7.21. BlockStatistic struct Table 7.29. Attributes summary Name Type Summary statistics Statistic[ ] 7.22. Bonding struct Represents a network interfaces bond. Table 7.30. Attributes summary Name Type Summary ad_partner_mac Mac The ad_partner_mac property of the partner bond in mode 4. options Option[ ] A list of option elements for a bonded interface. slaves HostNic[ ] A list of slave NICs for a bonded interface. 7.22.1. ad_partner_mac The ad_partner_mac property of the partner bond in mode 4. Bond mode 4 is the 802.3ad standard, which is also called dynamic link aggregation. See Wikipedia and Presentation for more information. ad_partner_mac is the MAC address of the system (switch) at the other end of a bond. This parameter is read-only. Setting it will have no effect on the bond. It is retrieved from /sys/class/net/bondX/bonding/ad_partner_mac file on the system where the bond is located. 7.22.2. options A list of option elements for a bonded interface. Each option contains property name and value attributes. Only required when adding bonded interfaces. 7.22.3. slaves A list of slave NICs for a bonded interface. Only required when adding bonded interfaces. Table 7.31. Links summary Name Type Summary active_slave HostNic The active_slave property of the bond in modes that support it (active-backup, balance-alb and balance-tlb). 7.22.4. active_slave The active_slave property of the bond in modes that support it (active-backup, balance-alb and balance-tlb). See Linux documentation for further details. This parameter is read-only. Setting it will have no effect on the bond. It is retrieved from /sys/class/net/bondX/bonding/active_slave file on the system where the bond is located. 
For example: Will respond: <host_nic href="/ovirt-engine/api/hosts/123/nics/321" id="321"> ... <bonding> <slaves> <host_nic href="/ovirt-engine/api/hosts/123/nics/456" id="456" /> ... </slaves> <active_slave href="/ovirt-engine/api/hosts/123/nics/456" id="456" /> </bonding> ... </host_nic> 7.23. Bookmark struct Represents a bookmark in the system. Table 7.32. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. value String The bookmark value, representing a search in the engine. 7.24. Boot struct Configuration of the boot sequence of a virtual machine. Table 7.33. Attributes summary Name Type Summary devices BootDevice[ ] Ordered list of boot devices. 7.24.1. devices Ordered list of boot devices. The virtual machine will try to boot from the given boot devices, in the given order. 7.25. BootDevice enum Represents the kinds of devices that a virtual machine can boot from. Table 7.34. Values summary Name Summary cdrom Boot from CD-ROM. hd Boot from the hard drive. network Boot from the network, using PXE. 7.25.1. cdrom Boot from CD-ROM. The CD-ROM can be chosen from the list of ISO files available in an ISO domain attached to the ata center that the virtual machine belongs to. 7.25.2. network Boot from the network, using PXE. It is necessary to have PXE configured on the network that the virtual machine is connected to. 7.26. BootMenu struct Represents boot menu configuration for virtual machines and templates. Table 7.35. Attributes summary Name Type Summary enabled Boolean Whether the boot menu is enabled for this virtual machine (or template), or not. 7.27. BootProtocol enum Defines the options of the IP address assignment method to a NIC. Table 7.36. Values summary Name Summary autoconf Stateless address auto-configuration. dhcp Dynamic host configuration protocol. none No address configuration. poly_dhcp_autoconf DHCP alongside Stateless address auto-configuration (SLAAC). static Statically-defined address, mask and gateway. 7.27.1. autoconf Stateless address auto-configuration. The mechanism is defined by RFC 4862 . Please refer to this wikipedia article for more information. Note The value is valid for IPv6 addresses only. 7.27.2. dhcp Dynamic host configuration protocol. Please refer to this wikipedia article for more information. 7.27.3. poly_dhcp_autoconf DHCP alongside Stateless address auto-configuration (SLAAC). The SLAAC mechanism is defined by RFC 4862 . Please refer to the Stateless address auto-configuration article and the DHCP article for more information. Note The value is valid for IPv6 addresses only. 7.28. BrickProfileDetail struct Table 7.37. Attributes summary Name Type Summary profile_details ProfileDetail[ ] Table 7.38. Links summary Name Type Summary brick GlusterBrick 7.29. Cdrom struct Table 7.39. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. file File id String A unique identifier. name String A human-readable name in plain text. Table 7.40. Links summary Name Type Summary instance_type InstanceType Optionally references to an instance type the device is used by. template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 
7.29.1. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk my be used simultaneously by two or more virtual machines. 7.30. Certificate struct Table 7.41. Attributes summary Name Type Summary comment String Free text containing comments about this object. content String description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. organization String subject String 7.31. Checkpoint struct Table 7.42. Attributes summary Name Type Summary comment String Free text containing comments about this object. creation_date Date The checkpoint creation date. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. parent_id String The parent checkpoint id. state CheckpointState The state of the checkpoint. Table 7.43. Links summary Name Type Summary disks Disk[ ] A list of disks contained in the backup checkpoint. vm Vm A reference to the virtual machine associated with the checkpoint. 7.32. CheckpointState enum Table 7.44. Values summary Name Summary created The initial state of the checkpoint. invalid The INVALID state set when a checkpoint cannot be used anymore for incremental backup and should be removed (For example, after committing to an older VM snapshot). 7.32.1. created The initial state of the checkpoint. It is set on entity creation. 7.33. CloudInit struct Deprecated type to specify cloud-init configuration. This type has been deprecated and replaced by alternative attributes inside the Initialization type. See the cloud_init attribute documentation for details. Table 7.45. Attributes summary Name Type Summary authorized_keys AuthorizedKey[ ] files File[ ] host Host network_configuration NetworkConfiguration regenerate_ssh_keys Boolean timezone String users User[ ] 7.34. CloudInitNetworkProtocol enum Defines the values for the cloud-init protocol. This protocol decides how the cloud-init network parameters are formatted before being passed to the virtual machine in order to be processed by cloud-init. Protocols supported are cloud-init version dependent. For more information, see Network Configuration Sources Table 7.46. Values summary Name Summary eni Legacy protocol. openstack_metadata Successor of the ENI protocol, with support for IPv6 and more. 7.34.1. eni Legacy protocol. Does not support IPv6. For more information, see Network Configuration ENI (Legacy) 7.34.2. openstack_metadata Successor of the ENI protocol, with support for IPv6 and more. This is the default value. For more information, see API: Proxy neutron configuration to guest instance 7.35. Cluster struct Type representation of a cluster. A JSON representation of a cluster: Table 7.47. Attributes summary Name Type Summary ballooning_enabled Boolean bios_type BiosType Chipset and BIOS type combination. comment String Free text containing comments about this object. cpu Cpu custom_scheduling_policy_properties Property[ ] Custom scheduling policy properties of the cluster. description String A human-readable description in plain text. display Display error_handling ErrorHandling fencing_policy FencingPolicy A custom fencing policy can be defined for a cluster. fips_mode FipsMode FIPS mode of the cluster. firewall_type FirewallType The type of firewall to be used on hosts in this cluster. gluster_service Boolean gluster_tuned_profile String The name of the tuned profile. 
ha_reservation Boolean id String A unique identifier. ksm Ksm log_max_memory_used_threshold Integer The memory consumption threshold for logging audit log events. log_max_memory_used_threshold_type LogMaxMemoryUsedThresholdType The memory consumption threshold type for logging audit log events. maintenance_reason_required Boolean This property has no longer any relevance and has been deprecated. memory_policy MemoryPolicy migration MigrationOptions Reference to cluster-wide configuration of migration of a running virtual machine to another host. name String A human-readable name in plain text. optional_reason Boolean This property has no longer any relevance and has been deprecated. required_rng_sources RngSource[ ] Set of random number generator (RNG) sources required from each host in the cluster. serial_number SerialNumber supported_versions Version[ ] switch_type SwitchType The type of switch to be used by all networks in given cluster. threads_as_cores Boolean trusted_service Boolean tunnel_migration Boolean upgrade_correlation_id String The upgrade correlation identifier. upgrade_in_progress Boolean Indicates if an upgrade has been started for the cluster. upgrade_percent_complete Integer If an upgrade is in progress, the upgrade's reported percent complete. version Version The compatibility version of the cluster. virt_service Boolean vnc_encryption Boolean Enable VNC encryption. 7.35.1. bios_type Chipset and BIOS type combination. This value is used as default for all virtual machines in the cluster having biosType set to CLUSTER_DEFAULT . 7.35.2. custom_scheduling_policy_properties Custom scheduling policy properties of the cluster. These optional properties override the properties of the scheduling policy specified by the scheduling_policy link, and apply only for this specific cluster. For example, to update the custom properties of the cluster, send a request: With a request body: <cluster> <custom_scheduling_policy_properties> <property> <name>HighUtilization</name> <value>70</value> </property> </custom_scheduling_policy_properties> </cluster> Update operations using the custom_scheduling_policy_properties attribute will not update the the properties of the scheduling policy specified by the scheduling_policy link, they will only be reflected on this specific cluster. 7.35.3. fencing_policy A custom fencing policy can be defined for a cluster. For example: With request body like this: <cluster> <fencing_policy> <enabled>true</enabled> <skip_if_sd_active> <enabled>false</enabled> </skip_if_sd_active> <skip_if_connectivity_broken> <enabled>false</enabled> <threshold>50</threshold> </skip_if_connectivity_broken> </fencing_policy> </cluster> 7.35.4. fips_mode FIPS mode of the cluster. FIPS mode represents the cluster's policy towards hosts. Hosts added to the cluster will be checked to fulfill the cluster's FIPS mode, making them non-operational if they do not. Unless a value is explicity provided, new clusters are initialized by default to UNDEFINED . This value changes automatically to the FIPS mode of the first host added to the cluster. 7.35.5. gluster_tuned_profile The name of the tuned profile. Tuned profile to set on all the hosts in the cluster. This is not mandatory and relevant only for clusters with Gluster service. 7.35.6. log_max_memory_used_threshold The memory consumption threshold for logging audit log events. For percentage, an audit log event is logged if the used memory is more that the value specified. 
For absolute value, an audit log event is logged when the the free memory falls below the value specified in MB. 7.35.7. log_max_memory_used_threshold_type The memory consumption threshold type for logging audit log events. You can choose between 'percentage' and 'absolute_value_in_mb'. 7.35.8. maintenance_reason_required This property has no longer any relevance and has been deprecated. Its default value is true, 7.35.9. migration Reference to cluster-wide configuration of migration of a running virtual machine to another host. Note API for querying migration policy by ID returned by this method is not implemented yet. Use /ovirt-engine/api/options/MigrationPolicies to get a list of all migration policies with their IDs. 7.35.10. optional_reason This property has no longer any relevance and has been deprecated. Its default value is true. 7.35.11. required_rng_sources Set of random number generator (RNG) sources required from each host in the cluster. When read, it returns the implicit urandom (for cluster version 4.1 and higher) or random (for cluster version 4.0 and lower) plus additional selected RNG sources. When written, the implicit urandom and random RNG sources cannot be removed. Important Before version 4.1 of the engine, the set of required random number generators was completely controllable by the administrator; any source could be added or removed, including the random source. But starting with version 4.1, the urandom and random sources will always be part of the set, and can't be removed. Important Engine version 4.1 introduces a new RNG source urandom that replaces random RNG source in clusters with compatibility version 4.1 or higher. 7.35.12. upgrade_correlation_id The upgrade correlation identifier. Use to correlate events detailing the cluster upgrade to the upgrade itself. 7.35.13. version The compatibility version of the cluster. All hosts in this cluster must support at least this compatibility version. For example: Will respond with: <cluster> ... <version> <major>4</major> <minor>0</minor> </version> ... </cluster> To update the compatibility version, use: With a request body like this: <cluster> <version> <major>4</major> <minor>1</minor> </version> </cluster> In order to update the cluster compatibility version, all hosts in the cluster must support the new compatibility version. 7.35.14. vnc_encryption Enable VNC encryption. Default value for this property is false. Table 7.48. Links summary Name Type Summary affinity_groups AffinityGroup[ ] cpu_profiles CpuProfile[ ] data_center DataCenter enabled_features ClusterFeature[ ] Custom features that are enabled for the cluster. external_network_providers ExternalProvider[ ] A reference to the external network provider available in the cluster. gluster_hooks GlusterHook[ ] gluster_volumes GlusterVolume[ ] mac_pool MacPool A reference to the MAC pool used by this cluster. management_network Network network_filters NetworkFilter[ ] networks Network[ ] permissions Permission[ ] scheduling_policy SchedulingPolicy Reference to the default scheduling policy used by this cluster. 7.35.15. external_network_providers A reference to the external network provider available in the cluster. If the automatic deployment of the external network provider is supported, the networks of the referenced network provider are available on every host in the cluster. External network providers of a cluster can only be set during adding the cluster . This value may be overwritten for individual hosts during adding the host . 7.35.16. 
scheduling_policy Reference to the default scheduling policy used by this cluster. Note The scheduling policy properties are taken by default from the referenced scheduling policy, but they are overridden by the properties specified in the custom_scheduling_policy_properties attribute for this cluster. 7.36. ClusterFeature struct Type represents an additional feature that is available at a cluster level. Table 7.49. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.50. Links summary Name Type Summary cluster_level ClusterLevel Reference to the cluster level. 7.37. ClusterLevel struct Describes the capabilities supported by a specific cluster level. Table 7.51. Attributes summary Name Type Summary comment String Free text containing comments about this object. cpu_types CpuType[ ] The CPU types supported by this cluster level. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. permits Permit[ ] The permits supported by this cluster level. Table 7.52. Links summary Name Type Summary cluster_features ClusterFeature[ ] The additional features supported by this cluster level. 7.38. ClusterUpgradeAction enum The action type for cluster upgrade action. Table 7.53. Values summary Name Summary finish The upgrade action to be passed to finish the cluster upgrade process by marking the cluster's upgrade_running flag to false. start The upgrade action to be passed to start the cluster upgrade process by marking the cluster's upgrade_running flag to true. update_progress The upgrade action to be passed to update the cluster upgrade progress. 7.38.1. finish The upgrade action to be passed to finish the cluster upgrade process by marking the cluster's upgrade_running flag to false. This should be used at the end of the cluster upgrade process. 7.38.2. start The upgrade action to be passed to start the cluster upgrade process by marking the cluster's upgrade_running flag to true. This should used at the beginning of the cluster upgrade process. 7.38.3. update_progress The upgrade action to be passed to update the cluster upgrade progress. This should be used as the upgrade progresses. 7.39. Configuration struct Table 7.54. Attributes summary Name Type Summary data String The document describing the virtual machine. type ConfigurationType 7.39.1. data The document describing the virtual machine. 
Example of the OVF document: <?xml version='1.0' encoding='UTF-8'?> <ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovf:version="3.5.0.0"> <References/> <Section xsi:type="ovf:NetworkSection_Type"> <Info>List of networks</Info> <Network ovf:name="Network 1"/> </Section> <Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info> </Section> <Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"> <CreationDate>2014/12/03 04:25:45</CreationDate> <ExportDate>2015/02/09 14:12:24</ExportDate> <DeleteProtected>false</DeleteProtected> <SsoMethod>guest_agent</SsoMethod> <IsSmartcardEnabled>false</IsSmartcardEnabled> <TimeZone>Etc/GMT</TimeZone> <default_boot_sequence>0</default_boot_sequence> <Generation>1</Generation> <VmType>1</VmType> <MinAllocatedMem>1024</MinAllocatedMem> <IsStateless>false</IsStateless> <IsRunAndPause>false</IsRunAndPause> <AutoStartup>false</AutoStartup> <Priority>1</Priority> <CreatedByUserId>fdfc627c-d875-11e0-90f0-83df133b58cc</CreatedByUserId> <IsBootMenuEnabled>false</IsBootMenuEnabled> <IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled> <IsSpiceCopyPasteEnabled>true</IsSpiceCopyPasteEnabled> <Name>VM_export</Name> <TemplateId>00000000-0000-0000-0000-000000000000</TemplateId> <TemplateName>Blank</TemplateName> <IsInitilized>false</IsInitilized> <Origin>3</Origin> <DefaultDisplayType>1</DefaultDisplayType> <TrustedService>false</TrustedService> <OriginalTemplateId>00000000-0000-0000-0000-000000000000</OriginalTemplateId> <OriginalTemplateName>Blank</OriginalTemplateName> <UseLatestVersion>false</UseLatestVersion> <Section ovf:id="70b4d9a7-4f73-4def-89ca-24fc5f60e01a" ovf:required="false" xsi:type="ovf:OperatingSystemSection_Type"> <Info>Guest Operating System</Info> <Description>other</Description> </Section> <Section xsi:type="ovf:VirtualHardwareSection_Type"> <Info>1 CPU, 1024 Memory</Info> <System> <vssd:VirtualSystemType>ENGINE 3.5.0.0</vssd:VirtualSystemType> </System> <Item> <rasd:Caption>1 virtual cpu</rasd:Caption> <rasd:Description>Number of virtual CPU</rasd:Description> <rasd:InstanceId>1</rasd:InstanceId> <rasd:ResourceType>3</rasd:ResourceType> <rasd:num_of_sockets>1</rasd:num_of_sockets> <rasd:cpu_per_socket>1</rasd:cpu_per_socket> </Item> <Item> <rasd:Caption>1024 MB of memory</rasd:Caption> <rasd:Description>Memory Size</rasd:Description> <rasd:InstanceId>2</rasd:InstanceId> <rasd:ResourceType>4</rasd:ResourceType> <rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits> <rasd:VirtualQuantity>1024</rasd:VirtualQuantity> </Item> <Item> <rasd:Caption>USB Controller</rasd:Caption> <rasd:InstanceId>3</rasd:InstanceId> <rasd:ResourceType>23</rasd:ResourceType> <rasd:UsbPolicy>DISABLED</rasd:UsbPolicy> </Item> </Section> </Content> </ovf:Envelope> 7.40. ConfigurationType enum Configuration format types. Table 7.55. Values summary Name Summary ova ConfigurationType of type standard OVF. ovf ConfigurationType of type oVirt-compatible OVF. 7.40.1. ova ConfigurationType of type standard OVF. The provided virtual machine configuration conforms with the Open Virtualization Format (OVF) standard. This value should be used for an OVF configuration that is extracted from an Open Virtual Appliance (OVA) that was generated by oVirt or by other vendors. See the OVF specification . 7.40.2. 
ovf ConfigurationType of type oVirt-compatible OVF. The provided virtual machine configuration conforms with the oVirt-compatible form of the Open Virtualization Format (OVF). Note that the oVirt-compatible form of the OVF may differ from the OVF standard that is used by other vendors. This value should be used for an OVF configuration that is taken from a storage domain. 7.41. Console struct Representation for serial console device. Table 7.56. Attributes summary Name Type Summary enabled Boolean Enable/disable the serial console device. 7.42. Core struct Table 7.57. Attributes summary Name Type Summary index Integer socket Integer 7.43. Cpu struct Table 7.58. Attributes summary Name Type Summary architecture Architecture cores Core[ ] cpu_tune CpuTune level Integer mode CpuMode name String speed Decimal topology CpuTopology type String 7.44. CpuMode enum Table 7.59. Values summary Name Summary custom host_model host_passthrough 7.45. CpuPinningPolicy enum Type representing the CPU and NUMA pinning policy. Table 7.60. Values summary Name Summary dedicated The CPU pinning will be automatically calculated by the engine when a vm starts and it will be dropped when the vm stops. isolate_threads The CPU pinning will be automatically calculated by the engine when a vm starts, and it will be dropped when the vm stops. manual The CPU pinning will be manually configured. none The CPU pinning won't be configured. resize_and_pin_numa The CPU and NUMA pinning will be configured by the dedicated host. 7.45.1. dedicated The CPU pinning will be automatically calculated by the engine when a vm starts and it will be dropped when the vm stops. The pinning is exclusive, that means that no other VM can use the pinned physical CPU. 7.45.2. isolate_threads The CPU pinning will be automatically calculated by the engine when a vm starts, and it will be dropped when the vm stops. The pinning is exclusive, each virtual thread will get an exclusive physical core. That means that no other VM can use the pinned physical CPU. 7.45.3. manual The CPU pinning will be manually configured. Currently, this means that the CPU pinning will be manually configured to the current virtual machine configuration. The VM needs to be pinned to at least one host. The Pinning is provided within the CPU configuration, using CpuTune. 7.45.4. none The CPU pinning won't be configured. Currently, this means that the CPU pinning won't be configured to the current virtual machine configuration. By default, the VM topology is set with 1 Socket, 1 Core and 1 Thread. 7.45.5. resize_and_pin_numa The CPU and NUMA pinning will be configured by the dedicated host. The CPU and NUMA pinning will use the dedicated host CPU topology. The virtual machine configuration will automatically be set to fit the host to get the highest possible performance. 7.46. CpuProfile struct Table 7.61. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.62. Links summary Name Type Summary cluster Cluster permissions Permission[ ] qos Qos 7.47. CpuTopology struct Table 7.63. Attributes summary Name Type Summary cores Integer sockets Integer threads Integer 7.48. CpuTune struct Table 7.64. Attributes summary Name Type Summary vcpu_pins VcpuPin[ ] 7.49. CpuType struct Describes a supported CPU type. Table 7.65. 
Attributes summary Name Type Summary architecture Architecture The architecture of the CPU. level Integer The level of the CPU type. name String The name of the CPU type, for example Intel Nehalem Family . 7.50. CreationStatus enum Table 7.66. Values summary Name Summary complete failed in_progress pending 7.51. CustomProperty struct Custom property representation. Table 7.67. Attributes summary Name Type Summary name String Property name. regexp String A regular expression defining the available values a custom property can get. value String Property value. 7.52. DataCenter struct Table 7.68. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. local Boolean name String A human-readable name in plain text. quota_mode QuotaModeType status DataCenterStatus storage_format StorageFormat supported_versions Version[ ] version Version The compatibility version of the data center. 7.52.1. version The compatibility version of the data center. All clusters in this data center must already be set to at least this compatibility version. For example: Will respond: <data_center> ... <version> <major>4</major> <minor>0</minor> </version> ... </data_center> To update the compatibility version, use: With a request body: <data_center> <version> <major>4</major> <minor>1</minor> </version> </data_center> Table 7.69. Links summary Name Type Summary clusters Cluster[ ] Reference to clusters inside this data center. iscsi_bonds IscsiBond[ ] Reference to ISCSI bonds used by this data center. mac_pool MacPool Reference to the MAC pool used by this data center. networks Network[ ] Reference to networks attached to this data center. permissions Permission[ ] Reference to permissions assigned to this data center. qoss Qos[ ] Reference to quality of service used by this data center. quotas Quota[ ] Reference to quotas assigned to this data center. storage_domains StorageDomain[ ] Reference to storage domains attached to this data center. 7.53. DataCenterStatus enum Table 7.70. Values summary Name Summary contend maintenance not_operational problematic uninitialized up 7.54. Device struct A device wraps links to potential parents of a device. Table 7.71. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.72. Links summary Name Type Summary instance_type InstanceType Optionally references to an instance type the device is used by. template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 7.54.1. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk my be used simultaneously by two or more virtual machines. 7.55. Disk struct Represents a virtual disk device. Table 7.73. Attributes summary Name Type Summary active Boolean Indicates if the disk is visible to the virtual machine. actual_size Integer The actual size of the disk, in bytes. alias String backup DiskBackup The backup behavior supported by the disk. backup_mode DiskBackupMode The type of the disk backup (full/incremental), visible only when the disk backup is in progress. 
bootable Boolean Indicates if the disk is marked as bootable. comment String Free text containing comments about this object. content_type DiskContentType Indicates the actual content residing on the disk. description String A human-readable description in plain text. external_disk String Use external disk. format DiskFormat The underlying storage format. id String A unique identifier. image_id String initial_size Integer The initial size of a sparse image disk created on block storage, in bytes. interface DiskInterface The type of interface driver used to connect the disk device to the virtual machine. logical_name String lun_storage HostStorage name String A human-readable name in plain text. propagate_errors Boolean Indicates if disk errors should cause virtual machine to be paused or if disk errors should be propagated to the the guest operating system instead. provisioned_size Integer The virtual size of the disk, in bytes. qcow_version QcowVersion The underlying QCOW version of a QCOW volume. read_only Boolean Indicates if the disk is in read-only mode. sgio ScsiGenericIO Indicates whether SCSI passthrough is enable and its policy. shareable Boolean Indicates if the disk can be attached to multiple virtual machines. sparse Boolean Indicates if the physical storage for the disk should not be preallocated. status DiskStatus The status of the disk device. storage_type DiskStorageType total_size Integer The total size of the disk including all of its snapshots, in bytes. uses_scsi_reservation Boolean wipe_after_delete Boolean Indicates if the disk's blocks will be read back as zeros after it is deleted: - On block storage, the disk will be zeroed and only then deleted. 7.55.1. active Indicates if the disk is visible to the virtual machine. Important When adding a disk attachment to a virtual machine, if the server accepts requests that do not contain this attribute the result is undefined. In some cases the disk will be automatically activated and in other cases it will not. To avoid issues it is strongly recommended to always include the this attribute with the desired value. 7.55.2. actual_size The actual size of the disk, in bytes. The actual size is the number of bytes actually used by the disk. It will be smaller than the provisioned size for disks that use the cow format. 7.55.3. bootable Indicates if the disk is marked as bootable. Important This attribute only makes sense for disks that are actually connected to virtual machines, and in version 4 of the API it has been moved to the DiskAttachment type. It is preserved here only for backwards compatibility, and it will be removed in the future. 7.55.4. external_disk Use external disk. An external disk can be a path to a local file or a block device, or a URL supported by QEMU such as: nbd:<host>:<port>[:exportname=<export>] nbd:unix:</path>[:exportname=<export>] http://[<username>[:<password>]@]<host>/<path> https://[<username>[:<password>]@]<host>/<path> ftp://[<username>[:<password>]@]<host>/<path> ftps://[<username>[:<password>]@]<host>/<path> See the QEMU manual for additional supported protocols and more info. 7.55.5. initial_size The initial size of a sparse image disk created on block storage, in bytes. The initial size is the number of bytes a sparse disk is initially allocated with when created on block storage. The initial size will be smaller than the provisioned size. If not specified the default initial size used by the system will be allocated. 7.55.6. 
interface The type of interface driver used to connect the disk device to the virtual machine. Important This attribute only makes sense for disks that are actually connected to virtual machines, and in version 4 of the API it has been moved to the DiskAttachment type. It is preserved here only for backwards compatibility, and it will be removed in the future. 7.55.7. provisioned_size The virtual size of the disk, in bytes. This attribute is mandatory when creating a new disk. 7.55.8. qcow_version The underlying QCOW version of a QCOW volume. The QCOW version specifies to the qemu which qemu version the volume supports. This field can be updated using the update API and will be reported only for QCOW volumes. It is determined by the version of the storage domain that the disk is created on. Storage domains with a version lower than V4 support QCOW2 volumes. V4 storage domains also support QCOW2v3. For more information about features of the different QCOW versions, see QCOW3 . 7.55.9. read_only Indicates if the disk is in read-only mode. Since version 4.0 this attribute is not shown in the API and was moved to DiskAttachment . Since version 4.1.2 of Red Hat Virtualization Manager this attribute is deprecated, and it will be removed in the future. In order to attach a disk in read only mode use the read_only attribute of the DiskAttachment type. For example: <disk_attachment> <read_only>true</read_only> ... </disk_attachment> 7.55.10. sgio Indicates whether SCSI passthrough is enable and its policy. Setting a value of filtered / unfiltered will enable SCSI passthrough for a LUN disk with unprivileged/privileged SCSI I/O. To disable SCSI passthrough the value should be set to disabled 7.55.11. shareable Indicates if the disk can be attached to multiple virtual machines. Important When a disk is attached to multiple virtual machines it is the responsibility of the guest operating systems of those virtual machines to coordinate access to it, to avoid corruption of the data, for example using a shared file system like GlusterFS or GFS . 7.55.12. total_size The total size of the disk including all of its snapshots, in bytes. The total size is the number of bytes actually used by the disk plus the size of its snapshots. It won't be populated for direct LUN and Cinder disks. For disks without snapshots the total size is equal to the actual size. 7.55.13. wipe_after_delete Indicates if the disk's blocks will be read back as zeros after it is deleted: On block storage, the disk will be zeroed and only then deleted. On file storage, since the file system already guarantees that previously removed blocks are read back as zeros, the disk will be deleted immediately. Table 7.74. Links summary Name Type Summary disk_profile DiskProfile disk_snapshots DiskSnapshot[ ] instance_type InstanceType Optionally references to an instance type the device is used by. openstack_volume_type OpenStackVolumeType permissions Permission[ ] quota Quota snapshot Snapshot statistics Statistic[ ] Statistics exposed by the disk. storage_domain StorageDomain storage_domains StorageDomain[ ] The storage domains associated with this disk. template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 7.55.14. statistics Statistics exposed by the disk. 
7.55.9. read_only Indicates if the disk is in read-only mode. Since version 4.0 this attribute is not shown in the API and was moved to DiskAttachment . Since version 4.1.2 of Red Hat Virtualization Manager this attribute is deprecated, and it will be removed in the future. To attach a disk in read-only mode, use the read_only attribute of the DiskAttachment type. For example: <disk_attachment> <read_only>true</read_only> ... </disk_attachment> 7.55.10. sgio Indicates whether SCSI passthrough is enabled and its policy. Setting a value of filtered / unfiltered will enable SCSI passthrough for a LUN disk with unprivileged/privileged SCSI I/O. To disable SCSI passthrough, the value should be set to disabled . 7.55.11. shareable Indicates if the disk can be attached to multiple virtual machines. Important When a disk is attached to multiple virtual machines it is the responsibility of the guest operating systems of those virtual machines to coordinate access to it, to avoid corruption of the data, for example by using a shared file system like GlusterFS or GFS . 7.55.12. total_size The total size of the disk including all of its snapshots, in bytes. The total size is the number of bytes actually used by the disk plus the size of its snapshots. It won't be populated for direct LUN and Cinder disks. For disks without snapshots the total size is equal to the actual size. 7.55.13. wipe_after_delete Indicates if the disk's blocks will be read back as zeros after it is deleted: On block storage, the disk will be zeroed and only then deleted. On file storage, since the file system already guarantees that previously removed blocks are read back as zeros, the disk will be deleted immediately. Table 7.74. Links summary Name Type Summary disk_profile DiskProfile disk_snapshots DiskSnapshot[ ] instance_type InstanceType Optionally references to an instance type the device is used by. openstack_volume_type OpenStackVolumeType permissions Permission[ ] quota Quota snapshot Snapshot statistics Statistic[ ] Statistics exposed by the disk. storage_domain StorageDomain storage_domains StorageDomain[ ] The storage domains associated with this disk. template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 7.55.14. statistics Statistics exposed by the disk. For example: <statistics> <statistic href="/ovirt-engine/api/disks/123/statistics/456" id="456"> <name>data.current.read</name> <description>Read data rate</description> <kind>gauge</kind> <type>decimal</type> <unit>bytes_per_second</unit> <values> <value> <datum>1052</datum> </value> </values> <disk href="/ovirt-engine/api/disks/123" id="123"/> </statistic> ... </statistics> These statistics are not directly included when the disk is retrieved, only a link. To obtain the statistics, follow the included link: GET /ovirt-engine/api/disks/123/statistics 7.55.15. storage_domains The storage domains associated with this disk. Note Only required when the first disk is being added to a virtual machine that was not itself created from a template. 7.55.16. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk may be used simultaneously by two or more virtual machines. 7.56. DiskAttachment struct Describes how a disk is attached to a virtual machine. Table 7.75. Attributes summary Name Type Summary active Boolean Defines whether the disk is active in the virtual machine it's attached to. bootable Boolean Defines whether the disk is bootable. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. interface DiskInterface The type of interface driver used to connect the disk device to the virtual machine. logical_name String The logical name of the virtual machine's disk, as seen from inside the virtual machine. name String A human-readable name in plain text. pass_discard Boolean Defines whether the virtual machine passes discard commands to the storage. read_only Boolean Indicates whether the disk is connected to the virtual machine as read-only. uses_scsi_reservation Boolean Defines whether SCSI reservation is enabled for this disk. 7.56.1. active Defines whether the disk is active in the virtual machine it's attached to. A disk attached to a virtual machine in an active status is connected to the virtual machine at run time and can be used. 7.56.2. logical_name The logical name of the virtual machine's disk, as seen from inside the virtual machine. The logical name of a disk is reported only when the guest agent is installed and running inside the virtual machine. For example, if the guest operating system is Linux and the disk is connected via a VirtIO interface, the logical name will be reported as /dev/vda : <disk_attachment> ... <logical_name>/dev/vda</logical_name> </disk_attachment> If the guest operating system is Windows, the logical name will be reported as \\.\PHYSICALDRIVE0 . 7.56.3. read_only Indicates whether the disk is connected to the virtual machine as read-only. When adding a new disk attachment, the default value is false . <disk_attachment> ... <read_only>true</read_only> </disk_attachment> 7.56.4. uses_scsi_reservation Defines whether SCSI reservation is enabled for this disk. Virtual machines with VIRTIO-SCSI passthrough enabled can set persistent SCSI reservations on disks. If they set persistent SCSI reservations, those virtual machines cannot be migrated to a different host because they would lose access to the disk: SCSI reservations are specific to SCSI initiators, and therefore to hosts. This scenario cannot be automatically detected. To avoid migrating these virtual machines, the user can set this attribute to true , to indicate the virtual machine is using SCSI reservations.
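As an illustrative sketch of the DiskAttachment type described above, a new disk might be created and attached to a virtual machine in a single request by posting to that virtual machine's diskattachments sub-collection. The virtual machine identifier 123, the disk name, the size, and the storage domain below are assumptions made for the sketch, not values from this reference.

POST /ovirt-engine/api/vms/123/diskattachments

<disk_attachment>
  <bootable>true</bootable>
  <interface>virtio_scsi</interface>
  <active>true</active>
  <disk>
    <name>mydisk</name>
    <format>cow</format>
    <provisioned_size>10737418240</provisioned_size>
    <storage_domains>
      <storage_domain>
        <name>mydata</name>
      </storage_domain>
    </storage_domains>
  </disk>
</disk_attachment>

Attaching an existing disk instead would reference it by identifier inside the disk element rather than describing a new one.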
Table 7.76. Links summary Name Type Summary disk Disk The reference to the disk. template Template The reference to the template. vm Vm The reference to the virtual machine. 7.57. DiskBackup enum Represents an enumeration of the backup mechanism that is enabled on the disk. Table 7.77. Values summary Name Summary incremental Incremental backup support. none No backup support. 7.58. DiskBackupMode enum Represents an enumeration of backup modes. Table 7.78. Values summary Name Summary full This disk supports full backup. incremental This disk supports incremental backup. 7.58.1. full This disk supports full backup. You can query zero extents and download all disk data. 7.58.2. incremental This disk supports incremental backup. You can query dirty extents and download changed blocks. 7.59. DiskContentType enum The actual content residing on the disk. Table 7.79. Values summary Name Summary backup_scratch The disk contains protected VM backup data. data The disk contains data. hosted_engine The disk contains the Hosted Engine VM disk. hosted_engine_configuration The disk contains the Hosted Engine configuration disk. hosted_engine_metadata The disk contains the Hosted Engine metadata disk. hosted_engine_sanlock The disk contains the Hosted Engine Sanlock disk. iso The disk contains an ISO image to be used as a CDROM device. memory_dump_volume The disk contains a memory dump from a live snapshot. memory_metadata_volume The disk contains memory metadata from a live snapshot. ovf_store The disk is an OVF store. 7.60. DiskFormat enum The underlying storage format of disks. Table 7.80. Values summary Name Summary cow The Copy On Write format allows snapshots, with a small performance overhead. raw The raw format does not allow snapshots, but offers improved performance. 7.61. DiskInterface enum The underlying storage interface used for communication between the disk and its controller. Table 7.81. Values summary Name Summary ide Legacy controller device. sata SATA controller device. spapr_vscsi Para-virtualized device supported by the IBM pSeries family of machines, using the SCSI protocol. virtio Virtualization interface where just the guest's device driver knows it is running in a virtual environment. virtio_scsi Para-virtualized SCSI controller device. 7.61.1. ide Legacy controller device. Works with almost all guest operating systems, so it is good for compatibility. Performance is lower than with the other alternatives. 7.61.2. virtio Virtualization interface where just the guest's device driver knows it is running in a virtual environment. Enables guests to get high-performance disk operations. 7.61.3. virtio_scsi Para-virtualized SCSI controller device. Fast interface with the guest via direct physical storage device address, using the SCSI protocol. 7.62. DiskProfile struct Table 7.82. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.83. Links summary Name Type Summary permissions Permission[ ] qos Qos storage_domain StorageDomain 7.63. DiskSnapshot struct Table 7.84. Attributes summary Name Type Summary active Boolean Indicates if the disk is visible to the virtual machine. actual_size Integer The actual size of the disk, in bytes. alias String backup DiskBackup The backup behavior supported by the disk.
backup_mode DiskBackupMode The type of the disk backup (full/incremental), visible only when the disk backup is in progress. bootable Boolean Indicates if the disk is marked as bootable. comment String Free text containing comments about this object. content_type DiskContentType Indicates the actual content residing on the disk. description String A human-readable description in plain text. external_disk String Use external disk. format DiskFormat The underlying storage format. id String A unique identifier. image_id String initial_size Integer The initial size of a sparse image disk created on block storage, in bytes. interface DiskInterface The type of interface driver used to connect the disk device to the virtual machine. logical_name String lun_storage HostStorage name String A human-readable name in plain text. propagate_errors Boolean Indicates if disk errors should cause virtual machine to be paused or if disk errors should be propagated to the the guest operating system instead. provisioned_size Integer The virtual size of the disk, in bytes. qcow_version QcowVersion The underlying QCOW version of a QCOW volume. read_only Boolean Indicates if the disk is in read-only mode. sgio ScsiGenericIO Indicates whether SCSI passthrough is enable and its policy. shareable Boolean Indicates if the disk can be attached to multiple virtual machines. sparse Boolean Indicates if the physical storage for the disk should not be preallocated. status DiskStatus The status of the disk device. storage_type DiskStorageType total_size Integer The total size of the disk including all of its snapshots, in bytes. uses_scsi_reservation Boolean wipe_after_delete Boolean Indicates if the disk's blocks will be read back as zeros after it is deleted: - On block storage, the disk will be zeroed and only then deleted. 7.63.1. active Indicates if the disk is visible to the virtual machine. Important When adding a disk attachment to a virtual machine, if the server accepts requests that do not contain this attribute the result is undefined. In some cases the disk will be automatically activated and in other cases it will not. To avoid issues it is strongly recommended to always include the this attribute with the desired value. 7.63.2. actual_size The actual size of the disk, in bytes. The actual size is the number of bytes actually used by the disk. It will be smaller than the provisioned size for disks that use the cow format. 7.63.3. bootable Indicates if the disk is marked as bootable. Important This attribute only makes sense for disks that are actually connected to virtual machines, and in version 4 of the API it has been moved to the DiskAttachment type. It is preserved here only for backwards compatibility, and it will be removed in the future. 7.63.4. external_disk Use external disk. An external disk can be a path to a local file or a block device, or a URL supported by QEMU such as: nbd:<host>:<port>[:exportname=<export>] nbd:unix:</path>[:exportname=<export>] http://[<username>[:<password>]@]<host>/<path> https://[<username>[:<password>]@]<host>/<path> ftp://[<username>[:<password>]@]<host>/<path> ftps://[<username>[:<password>]@]<host>/<path> See the QEMU manual for additional supported protocols and more info. 7.63.5. initial_size The initial size of a sparse image disk created on block storage, in bytes. The initial size is the number of bytes a sparse disk is initially allocated with when created on block storage. The initial size will be smaller than the provisioned size. 
If not specified the default initial size used by the system will be allocated. 7.63.6. interface The type of interface driver used to connect the disk device to the virtual machine. Important This attribute only makes sense for disks that are actually connected to virtual machines, and in version 4 of the API it has been moved to the DiskAttachment type. It is preserved here only for backwards compatibility, and it will be removed in the future. 7.63.7. provisioned_size The virtual size of the disk, in bytes. This attribute is mandatory when creating a new disk. 7.63.8. qcow_version The underlying QCOW version of a QCOW volume. The QCOW version specifies to the qemu which qemu version the volume supports. This field can be updated using the update API and will be reported only for QCOW volumes. It is determined by the version of the storage domain that the disk is created on. Storage domains with a version lower than V4 support QCOW2 volumes. V4 storage domains also support QCOW2v3. For more information about features of the different QCOW versions, see QCOW3 . 7.63.9. read_only Indicates if the disk is in read-only mode. Since version 4.0 this attribute is not shown in the API and was moved to DiskAttachment . Since version 4.1.2 of Red Hat Virtualization Manager this attribute is deprecated, and it will be removed in the future. In order to attach a disk in read only mode use the read_only attribute of the DiskAttachment type. For example: <disk_attachment> <read_only>true</read_only> ... </disk_attachment> 7.63.10. sgio Indicates whether SCSI passthrough is enable and its policy. Setting a value of filtered / unfiltered will enable SCSI passthrough for a LUN disk with unprivileged/privileged SCSI I/O. To disable SCSI passthrough the value should be set to disabled 7.63.11. shareable Indicates if the disk can be attached to multiple virtual machines. Important When a disk is attached to multiple virtual machines it is the responsibility of the guest operating systems of those virtual machines to coordinate access to it, to avoid corruption of the data, for example using a shared file system like GlusterFS or GFS . 7.63.12. total_size The total size of the disk including all of its snapshots, in bytes. The total size is the number of bytes actually used by the disk plus the size of its snapshots. It won't be populated for direct LUN and Cinder disks. For disks without snapshots the total size is equal to the actual size. 7.63.13. wipe_after_delete Indicates if the disk's blocks will be read back as zeros after it is deleted: On block storage, the disk will be zeroed and only then deleted. On file storage, since the file system already guarantees that previously removed blocks are read back as zeros, the disk will be deleted immediately. Table 7.85. Links summary Name Type Summary disk Disk disk_profile DiskProfile disk_snapshots DiskSnapshot[ ] instance_type InstanceType Optionally references to an instance type the device is used by. openstack_volume_type OpenStackVolumeType parent DiskSnapshot Parent disk snapshot. permissions Permission[ ] quota Quota snapshot Snapshot statistics Statistic[ ] Statistics exposed by the disk. storage_domain StorageDomain storage_domains StorageDomain[ ] The storage domains associated with this disk. template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 7.63.14. statistics Statistics exposed by the disk. 
For example: <statistics> <statistic href="/ovirt-engine/api/disks/123/statistics/456" id="456"> <name>data.current.read</name> <description>Read data rate</description> <kind>gauge</kind> <type>decimal</type> <unit>bytes_per_second</unit> <values> <value> <datum>1052</datum> </value> </values> <disk href="/ovirt-engine/api/disks/123" id="123"/> </statistic> ... </statistics> These statistics are not directly included when the disk is retrieved, only a link. To obtain the statistics follow the included link: 7.63.15. storage_domains The storage domains associated with this disk. Note Only required when the first disk is being added to a virtual machine that was not itself created from a template. 7.63.16. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk my be used simultaneously by two or more virtual machines. 7.64. DiskStatus enum Current status representation for disk. Table 7.86. Values summary Name Summary illegal Disk cannot be accessed by the virtual machine, and the user needs to take action to resolve the issue. locked The disk is being used by the system, therefore it cannot be accessed by virtual machines at this point. ok The disk status is normal and can be accessed by the virtual machine. 7.64.1. locked The disk is being used by the system, therefore it cannot be accessed by virtual machines at this point. This is usually a temporary status, until the disk is freed. 7.65. DiskStorageType enum Table 7.87. Values summary Name Summary cinder image lun managed_block_storage A storage type, used for a storage domain that was created using a cinderlib driver. 7.66. DiskType enum Table 7.88. Values summary Name Summary data system 7.67. Display struct Represents a graphic console configuration. Table 7.89. Attributes summary Name Type Summary address String The IP address of the guest to connect the graphic console client to. allow_override Boolean Indicates if to override the display address per host. certificate Certificate The TLS certificate in case of a TLS connection. copy_paste_enabled Boolean Indicates whether a user is able to copy and paste content from an external host into the graphic console. disconnect_action String Returns the action that will take place when the graphic console is disconnected. disconnect_action_delay Integer Delay (in minutes) before the graphic console disconnect action is carried out. file_transfer_enabled Boolean Indicates if a user is able to drag and drop files from an external host into the graphic console. keyboard_layout String The keyboard layout to use with this graphic console. monitors Integer The number of monitors opened for this graphic console. port Integer The port address on the guest to connect the graphic console client to. proxy String The proxy IP which will be used by the graphic console client to connect to the guest. secure_port Integer The secured port address on the guest, in case of using TLS, to connect the graphic console client to. single_qxl_pci Boolean The engine now sets it automatically according to the operating system. smartcard_enabled Boolean Indicates if to use smart card authentication. type DisplayType The graphic console protocol type. 7.67.1. allow_override Indicates if to override the display address per host. Relevant only for the Host.display attribute. If set, the graphical console address of a virtual machine will be overridden by the host specified display address. 
if not set, the graphical console address of a virtual machine will not be overridden. 7.67.2. certificate The TLS certificate in case of a TLS connection. If TLS isn't enabled then it won't be reported. 7.67.3. copy_paste_enabled Indicates whether a user is able to copy and paste content from an external host into the graphic console. This option is only available for the SPICE console type. 7.67.4. disconnect_action Returns the action that will take place when the graphic console is disconnected. The options are: none No action is taken. lock_screen Locks the currently active user session. logout Logs out the currently active user session. reboot Initiates a graceful virtual machine reboot. shutdown Initiates a graceful virtual machine shutdown. This option is only available for the SPICE console type. 7.67.5. disconnect_action_delay Delay (in minutes) before the graphic console disconnect action is carried out. This option is only available for Shutdown disconnect action. 7.67.6. file_transfer_enabled Indicates if a user is able to drag and drop files from an external host into the graphic console. This option is only available for the SPICE console type. 7.67.7. keyboard_layout The keyboard layout to use with this graphic console. This option is only available for the VNC console type. If no keyboard is enabled then it won't be reported. 7.67.8. monitors The number of monitors opened for this graphic console. This option is only available for the SPICE console type. Possible values are 1, 2 or 4. 7.67.9. proxy The proxy IP which will be used by the graphic console client to connect to the guest. It is useful when the client is outside the guest's network. This option is only available for the SPICE console type. This proxy can be set in global configuration, cluster level, virtual machine pool level or disabled per virtual machine. If the proxy is set in any of this mentioned places and not disabled for the virtual machine, it will be returned by this method. If the proxy is not set, nothing will be reported. 7.67.10. secure_port The secured port address on the guest, in case of using TLS, to connect the graphic console client to. If TLS isn't enabled then it won't be reported. 7.67.11. single_qxl_pci The engine now sets it automatically according to the operating system. Therefore, it has been deprecated since 4.4.5. Indicates if to use one PCI slot for each monitor or to use a single PCI channel for all multiple monitors. This option is only available for the SPICE console type and only for connecting a guest Linux based OS. 7.67.12. smartcard_enabled Indicates if to use smart card authentication. This option is only available for the SPICE console type. 7.68. DisplayType enum Represents an enumeration of the protocol used to connect to the graphic console of the virtual machine. Table 7.90. Values summary Name Summary spice Display of type SPICE. vnc Display of type VNC. 7.68.1. spice Display of type SPICE. See SPICE documentation for more details. 7.68.2. vnc Display of type VNC. VNC stands for Virtual Network Computing, and it is a graphical desktop sharing system that uses RFB (Remote Frame Buffer) protocol to remotely control another machine. 7.69. Dns struct Represents the DNS resolver configuration. Table 7.91. Attributes summary Name Type Summary search_domains Host[ ] Array of hosts serving as search domains. servers Host[ ] Array of hosts serving as DNS servers. 7.70. DnsResolverConfiguration struct Represents the DNS resolver configuration. Table 7.92. 
Attributes summary Name Type Summary name_servers String[ ] Array of addresses of name servers. 7.70.1. name_servers Array of addresses of name servers. Either IPv4 or IPv6 addresses may be specified. 7.71. Domain struct This type represents a directory service domain. Table 7.93. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. user User Table 7.94. Links summary Name Type Summary groups Group[ ] A reference to all groups in the directory service. users User[ ] A reference to a list of all users in the directory service. 7.71.1. users A reference to a list of all users in the directory service. This information is used to add new users to the Red Hat Virtualization environment. 7.72. DynamicCpu struct Configuration of the Dynamic CPUs of a virtual machine. Table 7.95. Attributes summary Name Type Summary cpu_tune CpuTune topology CpuTopology 7.73. EntityExternalStatus enum Type representing an external entity status. Table 7.96. Values summary Name Summary error The external entity status is erroneous. failure The external entity has an issue that causes failures. info The external entity status is okay but with some information that might be relevant. ok The external entity status is okay. warning The external entity status is okay but with an issue that might require attention. 7.73.1. error The external entity status is erroneous. This might require moderate attention. 7.73.2. failure The external entity has an issue that causes failures. This might require immediate attention. 7.74. EntityProfileDetail struct Table 7.97. Attributes summary Name Type Summary profile_details ProfileDetail[ ] 7.75. ErrorHandling struct Table 7.98. Attributes summary Name Type Summary on_error MigrateOnError 7.76. Event struct Type representing an event. Table 7.99. Attributes summary Name Type Summary code Integer The event code. comment String Free text containing comments about this object. correlation_id String The event correlation identifier. custom_data String Free text representing custom event data. custom_id Integer A custom event identifier. description String A human-readable description in plain text. flood_rate Integer Defines the flood rate. id String A unique identifier. index Integer The numeric index of this event. log_on_host Boolean Specifies whether the event should also be written to the ${hypervisor.name} log. name String A human-readable name in plain text. origin String Free text identifying the origin of the event. severity LogSeverity The event severity. time Date The event time. 7.76.1. correlation_id The event correlation identifier. Used in order to correlate several events together. 7.76.2. flood_rate Defines the flood rate. This prevents flooding in case an event appears more than once within the defined rate. The default is 30 seconds. 7.76.3. index The numeric index of this event. The indexes of events are always increasing, so events with higher indexes are guaranteed to be newer than events with lower indexes. Important In the current implementation of the engine, the id attribute has the same value as this index attribute. That is an implementation detail that the user of the API should not rely on. In the future the id attribute may be changed to an arbitrary string, containing non-numeric characters and no implicit order. On the other hand, this index attribute is guaranteed to stay an integer and ordered. 7.76.4. log_on_host Specifies whether the event should also be written to the ${hypervisor.name} log. If no host is specified, the event description will be written to all hosts. Default is false. Table 7.100. Links summary Name Type Summary cluster Cluster Reference to the cluster service. data_center DataCenter Reference to the data center service. host Host Reference to the host service. storage_domain StorageDomain Reference to the storage domain service. template Template Reference to the template service. user User Reference to the user service. vm Vm Reference to the virtual machine service. 7.76.5. cluster Reference to the cluster service. Event can be associated with a cluster. 7.76.6. data_center Reference to the data center service. Event can be associated with a data center. 7.76.7. host Reference to the host service. Event can be associated with a host. 7.76.8. storage_domain Reference to the storage domain service. Event can be associated with a storage domain. 7.76.9. template Reference to the template service. Event can be associated with a template. 7.76.10. user Reference to the user service. Event can be associated with a user. 7.76.11. vm Reference to the virtual machine service. Event can be associated with a virtual machine.
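Custom events can also be added to the audit log through the events collection. The following is an illustrative sketch rather than an example from this reference; the description, severity, origin, and custom_id values are assumptions chosen for the sketch.

POST /ovirt-engine/api/events

<event>
  <description>File system /home is full</description>
  <severity>alert</severity>
  <origin>mymonitor</origin>
  <custom_id>1467879754</custom_id>
</event>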
7.77. EventSubscription struct Table 7.101. Attributes summary Name Type Summary address String The email address to which notifications should be sent. comment String Free text containing comments about this object. description String A human-readable description in plain text. event NotifiableEvent The subscribed-for event. id String A unique identifier. name String A human-readable name in plain text. notification_method NotificationMethod The notification method: SMTP or SNMP. user User The subscribing user. 7.77.1. address The email address to which notifications should be sent. When not provided, notifications are sent to the user's email. Only a single address per user is currently supported. If a subscription with an email address different from that of existing subscriptions is added, a 409 (CONFLICT) status is returned with an explanation that the provided address conflicts with an existing address of an event-subscription for this user. This field might be deprecated in the future, and notifications will always be sent to the user's email address. 7.77.2. event The subscribed-for event. (Combined with the user, uniquely identifies the event-subscription.) 7.77.3. notification_method The notification method: SMTP or SNMP. Currently only SMTP is supported by the API. Support for SNMP will be added in the future. 7.77.4. user The subscribing user. Combined with the event-name, uniquely identifies the event-subscription.
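As an illustrative sketch of how such a subscription might be created, a request could be posted to a user's eventsubscriptions sub-collection; the user identifier 123, the event name, and the email address below are assumptions, not values from this reference.

POST /ovirt-engine/api/users/123/eventsubscriptions

<event_subscription>
  <event>host_high_cpu_load</event>
  <address>michael@example.com</address>
</event_subscription>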
7.78. ExternalComputeResource struct Table 7.102. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. provider String url String user String Table 7.103. Links summary Name Type Summary external_host_provider ExternalHostProvider 7.79. ExternalDiscoveredHost struct Table 7.104. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. ip String last_report String mac String name String A human-readable name in plain text. subnet_name String Table 7.105. Links summary Name Type Summary external_host_provider ExternalHostProvider 7.80. ExternalHost struct Represents a host provisioned by a host provider (such as Foreman/Satellite). See Foreman documentation for more details. See Satellite documentation for more details on Red Hat Satellite. Table 7.106. Attributes summary Name Type Summary address String The address of the host, either IP address or FQDN (Fully Qualified Domain Name). comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.107. Links summary Name Type Summary external_host_provider ExternalHostProvider A reference to the external host provider that the host is managed by. 7.81. ExternalHostGroup struct Table 7.108. Attributes summary Name Type Summary architecture_name String comment String Free text containing comments about this object. description String A human-readable description in plain text. domain_name String id String A unique identifier. name String A human-readable name in plain text. operating_system_name String subnet_name String Table 7.109. Links summary Name Type Summary external_host_provider ExternalHostProvider 7.82. ExternalHostProvider struct Represents an external host provider, such as Foreman or Satellite. See Foreman documentation for more details. See Satellite documentation for more details on Red Hat Satellite. Table 7.110. Attributes summary Name Type Summary authentication_url String Defines the external provider authentication URL address. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. password String Defines password for the user during the authentication process. properties Property[ ] Array of provider name/value properties. requires_authentication Boolean Defines whether provider authentication is required or not. url String Defines URL address of the external provider. username String Defines user name to be used during authentication process. 7.82.1. requires_authentication Defines whether provider authentication is required or not. If authentication is required, both username and password attributes will be used during authentication. Table 7.111. Links summary Name Type Summary certificates Certificate[ ] A reference to the certificates the engine supports for this provider. compute_resources ExternalComputeResource[ ] A reference to the compute resource as represented in the host provider. discovered_hosts ExternalDiscoveredHost[ ] A reference to the discovered hosts in the host provider. host_groups ExternalHostGroup[ ] A reference to the host groups in the host provider. hosts Host[ ] A reference to the hosts provisioned by the host provider. 7.82.2. compute_resources A reference to the compute resource as represented in the host provider. Each host provider optionally has the engine defined as a compute resource, which allows creating virtual machines in the engine. These compute resource details are used in the bare-metal provisioning use case, in order to deploy the hypervisor. 7.82.3. discovered_hosts A reference to the discovered hosts in the host provider. Discovered hosts are hosts that have not been provisioned yet. 7.82.4. host_groups A reference to the host groups in the host provider. A host group contains different properties that the host provider applies to all hosts that are members of this group, such as installed software, system definitions, passwords, and more.
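For illustration, a provider such as Foreman or Satellite might be registered by posting to the externalhostproviders collection. This is an illustrative sketch only; the provider name, URL, and credentials are assumptions, not values from this reference.

POST /ovirt-engine/api/externalhostproviders

<external_host_provider>
  <name>foreman</name>
  <url>https://foreman.example.com</url>
  <requires_authentication>true</requires_authentication>
  <username>admin</username>
  <password>mypassword</password>
</external_host_provider>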
7.83. ExternalNetworkProviderConfiguration struct Describes how an external network provider is provisioned on a host. Table 7.112. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.113. Links summary Name Type Summary external_network_provider ExternalProvider Link to the external network provider. host Host Link to the host. 7.84. ExternalProvider struct Represents an external provider. Table 7.114. Attributes summary Name Type Summary authentication_url String Defines the external provider authentication URL address. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. password String Defines password for the user during the authentication process. properties Property[ ] Array of provider name/value properties. requires_authentication Boolean Defines whether provider authentication is required or not. url String Defines URL address of the external provider. username String Defines user name to be used during authentication process. 7.84.1. requires_authentication Defines whether provider authentication is required or not. If authentication is required, both username and password attributes will be used during authentication. 7.85. ExternalStatus enum Represents an external status. This status is currently used for hosts and storage domains , and allows an external system to update the status of objects it is aware of. Table 7.115. Values summary Name Summary error Error status. failure Failure status. info Info status. ok OK status. warning Warning status. 7.85.1. error Error status. There is some kind of error in the relevant object. 7.85.2. failure Failure status. The relevant object is failing. 7.85.3. info Info status. The relevant object is in OK status, but there is information available that might be relevant for the administrator. 7.85.4. ok OK status. The relevant object is working well. 7.85.5. warning Warning status. The relevant object is working well, but there is some warning that might be relevant for the administrator. 7.86. ExternalSystemType enum Represents the type of the external system that is associated with the step . Table 7.116. Values summary Name Summary gluster Represents Gluster as the external system which is associated with the step . vdsm Represents VDSM as the external system which is associated with the step . 7.87. ExternalTemplateImport struct Describes the parameters for the template import operation from an external system. Currently supports OVA only. Table 7.117. Attributes summary Name Type Summary clone Boolean Optional. url String The URL to be passed to the engine. 7.87.1. clone Optional. Indicates if the identifiers of the imported template should be regenerated. By default, when a template is imported, the identifiers are preserved. This means that the same template can't be imported multiple times, as the identifiers need to be unique. To allow importing the same template multiple times, set this parameter to true , as the default is false . 7.87.2. url The URL to be passed to the engine. Example: Table 7.118.
Links summary Name Type Summary cluster Cluster Specifies the target cluster for the resulting template. cpu_profile CpuProfile Optional. host Host Specifies the host that the OVA file exists on. quota Quota Optional. storage_domain StorageDomain Specifies the target storage domain for disks. template Template The template entity used to specify a name for the newly created template. 7.87.3. cpu_profile Optional. Specifies the CPU profile of the resulting template. 7.87.4. quota Optional. Specifies the quota that will be applied to the resulting template. 7.87.5. template The template entity used to specify a name for the newly created template. If a name is not specified, the source template name will be used. 7.88. ExternalVmImport struct Describes the parameters for the virtual machine import operation from an external system. Table 7.119. Attributes summary Name Type Summary name String The name of the virtual machine to be imported, as is defined within the external system. password String The password to authenticate against the external hypervisor system. provider ExternalVmProviderType The type of external virtual machine provider. sparse Boolean Optional. url String The URL to be passed to the virt-v2v tool for conversion. username String The username to authenticate against the external hypervisor system. 7.88.1. sparse Optional. Specifies the disk allocation policy of the resulting virtual machine: true for sparse, false for preallocated. If not specified: - When importing an OVA that was produced by oVirt, it will be determined according to the configuration of the disk within the OVF. - Otherwise, it will be set to true. 7.88.2. url The URL to be passed to the virt-v2v tool for conversion. Example: More examples can be found at http://libguestfs.org/virt-v2v.1.html . Table 7.120. Links summary Name Type Summary cluster Cluster Specifies the target cluster for the resulting virtual machine. cpu_profile CpuProfile Optional. drivers_iso File Optional. host Host Optional. quota Quota Optional. storage_domain StorageDomain Specifies the target storage domain for converted disks. vm Vm The virtual machine entity used to specify a name for the newly created virtual machine. 7.88.3. cpu_profile Optional. Specifies the CPU profile of the resulting virtual machine. 7.88.4. drivers_iso Optional. The name of the ISO containing drivers that can be used during the virt-v2v conversion process. 7.88.5. host Optional. Specifies the host (using host's ID) to be used for the conversion process. If not specified, one is selected automatically. 7.88.6. quota Optional. Specifies the quota that will be applied to the resulting virtual machine. 7.88.7. vm The virtual machine entity used to specify a name for the newly created virtual machine. If a name is not specified, the source virtual machine name will be used. 7.89. ExternalVmProviderType enum Describes the type of external hypervisor system. Table 7.121. Values summary Name Summary kvm vmware xen 7.90. Fault struct Table 7.122. Attributes summary Name Type Summary detail String reason String 7.91. FenceType enum Type representing the type of the fence operation. Table 7.123. Values summary Name Summary manual Manual host fencing via power management. restart Restart the host via power management. start Start the host via power management. status Check the host power status via power management. stop Stop the host via power management. 7.92. FencingPolicy struct Type representing a cluster fencing policy. Table 7.124. 
Attributes summary Name Type Summary enabled Boolean Enable or disable fencing on this cluster. skip_if_connectivity_broken SkipIfConnectivityBroken If enabled, we will not fence a host in case more than a configurable percentage of hosts in the cluster lost connectivity as well. skip_if_gluster_bricks_up Boolean A flag indicating if fencing should be skipped if Gluster bricks are up and running in the host being fenced. skip_if_gluster_quorum_not_met Boolean A flag indicating if fencing should be skipped if Gluster bricks are up and running and Gluster quorum will not be met without those bricks. skip_if_sd_active SkipIfSdActive If enabled, we will skip fencing in case the host maintains its lease in the storage. 7.92.1. skip_if_connectivity_broken If enabled, we will not fence a host in case more than a configurable percentage of hosts in the cluster lost connectivity as well. This comes to prevent fencing storm in cases where there is a global networking issue in the cluster. 7.92.2. skip_if_gluster_bricks_up A flag indicating if fencing should be skipped if Gluster bricks are up and running in the host being fenced. This flag is optional, and the default value is false . 7.92.3. skip_if_gluster_quorum_not_met A flag indicating if fencing should be skipped if Gluster bricks are up and running and Gluster quorum will not be met without those bricks. This flag is optional, and the default value is false . 7.92.4. skip_if_sd_active If enabled, we will skip fencing in case the host maintains its lease in the storage. It means that if the host still has storage access then it won't get fenced. 7.93. File struct Table 7.125. Attributes summary Name Type Summary comment String Free text containing comments about this object. content String description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. type String Table 7.126. Links summary Name Type Summary storage_domain StorageDomain 7.94. Filter struct Table 7.127. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. position Integer Table 7.128. Links summary Name Type Summary scheduling_policy_unit SchedulingPolicyUnit 7.95. FipsMode enum Representation of the FIPS mode to the cluster. Table 7.129. Values summary Name Summary disabled The FIPS mode is disabled. enabled The FIPS mode is enabled. undefined The FIPS mode is not yet evaluated. 7.95.1. disabled The FIPS mode is disabled. Its implication is that the FIPS mode is disabled and the hosts within should be with FIPS mode disabled, otherwise they would be non-operational. 7.95.2. enabled The FIPS mode is enabled. Its implication is that the FIPS mode is enabled and the hosts within should be with FIPS mode enabled, otherwise they should be non-operational. 7.95.3. undefined The FIPS mode is not yet evaluated. Currently, its implication is that the FIPS mode is undetermined. Once a host is added, this value will switch according to the host settings. 7.96. FirewallType enum Describes all firewall types supported by the system. Table 7.130. Values summary Name Summary firewalld FirewallD firewall type. iptables IPTables firewall type. 7.96.1. firewalld FirewallD firewall type. When a cluster has the firewall type set to firewalld , the firewalls of all hosts in the cluster will be configured using firewalld . 
FirewallD replaced IPTables in version 4.2. It simplifies configuration using a command line program and dynamic configuration. 7.96.2. iptables IPTables firewall type. iptables is deprecated. 7.97. Floppy struct The underlying representation of a floppy file. Table 7.131. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. file File File object that represent the Floppy device's content and its type. id String A unique identifier. name String A human-readable name in plain text. Table 7.132. Links summary Name Type Summary instance_type InstanceType Optionally references to an instance type the device is used by. template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 7.97.1. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk my be used simultaneously by two or more virtual machines. 7.98. FopStatistic struct Table 7.133. Attributes summary Name Type Summary name String statistics Statistic[ ] 7.99. GlusterBrick struct Table 7.134. Attributes summary Name Type Summary brick_dir String comment String Free text containing comments about this object. description String A human-readable description in plain text. device String fs_name String gluster_clients GlusterClient[ ] id String A unique identifier. memory_pools GlusterMemoryPool[ ] mnt_options String name String A human-readable name in plain text. pid Integer port Integer server_id String status GlusterBrickStatus Table 7.135. Links summary Name Type Summary gluster_volume GlusterVolume instance_type InstanceType Optionally references to an instance type the device is used by. statistics Statistic[ ] template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 7.99.1. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk my be used simultaneously by two or more virtual machines. 7.100. GlusterBrickAdvancedDetails struct Table 7.136. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. device String fs_name String gluster_clients GlusterClient[ ] id String A unique identifier. memory_pools GlusterMemoryPool[ ] mnt_options String name String A human-readable name in plain text. pid Integer port Integer Table 7.137. Links summary Name Type Summary instance_type InstanceType Optionally references to an instance type the device is used by. template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 7.100.1. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk my be used simultaneously by two or more virtual machines. 7.101. GlusterBrickMemoryInfo struct Table 7.138. Attributes summary Name Type Summary memory_pools GlusterMemoryPool[ ] 7.102. GlusterBrickStatus enum Table 7.139. 
Values summary Name Summary down Brick is in down state, the data cannot be stored or retrieved from it. unknown When the status cannot be determined due to host being non-responsive. up Brick is in up state, the data can be stored or retrieved from it. 7.103. GlusterClient struct Table 7.140. Attributes summary Name Type Summary bytes_read Integer bytes_written Integer client_port Integer host_name String 7.104. GlusterHook struct Table 7.141. Attributes summary Name Type Summary checksum String comment String Free text containing comments about this object. conflict_status Integer conflicts String content String content_type HookContentType description String A human-readable description in plain text. gluster_command String id String A unique identifier. name String A human-readable name in plain text. stage HookStage status GlusterHookStatus Table 7.142. Links summary Name Type Summary cluster Cluster server_hooks GlusterServerHook[ ] 7.105. GlusterHookStatus enum Table 7.143. Values summary Name Summary disabled Hook is disabled in the cluster. enabled Hook is enabled in the cluster. missing Unknown/missing hook status. 7.106. GlusterMemoryPool struct Table 7.144. Attributes summary Name Type Summary alloc_count Integer cold_count Integer comment String Free text containing comments about this object. description String A human-readable description in plain text. hot_count Integer id String A unique identifier. max_alloc Integer max_stdalloc Integer name String A human-readable name in plain text. padded_size Integer pool_misses Integer type String 7.107. GlusterServerHook struct Table 7.145. Attributes summary Name Type Summary checksum String comment String Free text containing comments about this object. content_type HookContentType description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. status GlusterHookStatus Table 7.146. Links summary Name Type Summary host Host 7.108. GlusterState enum Table 7.147. Values summary Name Summary down unknown up 7.109. GlusterVolume struct Table 7.148. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. disperse_count Integer id String A unique identifier. name String A human-readable name in plain text. options Option[ ] redundancy_count Integer replica_count Integer status GlusterVolumeStatus stripe_count Integer transport_types TransportType[ ] volume_type GlusterVolumeType Table 7.149. Links summary Name Type Summary bricks GlusterBrick[ ] cluster Cluster statistics Statistic[ ] 7.110. GlusterVolumeProfileDetails struct Table 7.150. Attributes summary Name Type Summary brick_profile_details BrickProfileDetail[ ] comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. nfs_profile_details NfsProfileDetail[ ] 7.111. GlusterVolumeStatus enum Table 7.151. Values summary Name Summary down Volume needs to be started, for clients to be able to mount and use it. unknown When the status cannot be determined due to host being non-responsive. up Volume is started, and can be mounted and used by clients. 7.112. GlusterVolumeType enum Type representing the type of Gluster Volume. Table 7.152. 
Values summary Name Summary disperse Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures. distribute Distributed volumes distribute files throughout the bricks in the volume. distributed_disperse Distributed dispersed volumes distribute files across dispersed subvolumes. distributed_replicate Distributed replicated volumes distribute files across replicated bricks in the volume. distributed_stripe Distributed striped volumes stripe data across two or more nodes in the cluster. distributed_striped_replicate Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. replicate Replicated volumes replicate files across bricks in the volume. stripe Striped volumes stripe data across bricks in the volume. striped_replicate Striped replicated volumes stripe data across replicated bricks in the cluster. 7.112.1. disperse Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures. Dispersed volumes store an encoded fragment of the original file on each brick in a way that only a subset of the fragments is needed to recover the original file. The number of bricks that can be missing without losing access to data is configured by the administrator at volume creation time. 7.112.2. distribute Distributed volumes distribute files throughout the bricks in the volume. Distributed volumes can be used where the requirement is to scale storage and the redundancy is either not important or is provided by other hardware/software layers. 7.112.3. distributed_disperse Distributed dispersed volumes distribute files across dispersed subvolumes. This has the same advantages as distributed replicated volumes, but uses dispersion to store the data in the bricks. 7.112.4. distributed_replicate Distributed replicated volumes distribute files across replicated bricks in the volume. Distributed replicated volumes can be used in environments where the requirement is to scale storage and high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments. 7.112.5. distributed_stripe Distributed striped volumes stripe data across two or more nodes in the cluster. Distributed striped volumes should be used where the requirement is to scale storage and, in high concurrency environments, access to very large files is critical. Note: With the introduction of Sharding in Glusterfs 3.7 releases, striped volumes are not recommended and it will be removed in future release. 7.112.6. distributed_striped_replicate Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, distributed striped replicated volumes should be used in highly concurrent environments where parallel access of very large files and performance is critical. Note: With the introduction of Sharding in Glusterfs 3.7 releases, striped volumes are not recommended and it will be removed in future release. 7.112.7. replicate Replicated volumes replicate files across bricks in the volume. Replicated volumes can be used in environments where high availability and high reliability are critical. 7.112.8. stripe Striped volumes stripe data across bricks in the volume. For best results, striped volumes should be used only in high concurrency environments accessing very large files.
Note: With the introduction of Sharding in Glusterfs 3.7 releases, striped volumes are not recommended and it will be removed in future release. 7.112.9. striped_replicate Striped replicated volumes stripes data across replicated bricks in the cluster. For best results, striped replicated volumes should be used in highly concurrent environments where there is parallel access of very large files and performance is critical. Note: With the introduction of Sharding in Glusterfs 3.7 releases, striped volumes are not recommended and it will be removed in future release. 7.113. GracePeriod struct Table 7.153. Attributes summary Name Type Summary expiry Integer 7.114. GraphicsConsole struct Table 7.154. Attributes summary Name Type Summary address String comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. port Integer protocol GraphicsType tls_port Integer Table 7.155. Links summary Name Type Summary instance_type InstanceType template Template vm Vm 7.115. GraphicsType enum The graphics protocol used to connect to the graphic console. Table 7.156. Values summary Name Summary spice Graphics protocol of type SPICE. vnc Graphics protocol of type VNC. 7.115.1. spice Graphics protocol of type SPICE. See SPICE documentation for more details. 7.115.2. vnc Graphics protocol of type VNC. VNC stands for Virtual Network Computing, and it is a graphical desktop sharing system that uses RFB (Remote Frame Buffer) protocol to remotely control another machine. 7.116. Group struct This type represents all groups in the directory service. Table 7.157. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. domain_entry_id String The containing directory service domain id. id String A unique identifier. name String A human-readable name in plain text. namespace String Namespace where group resides. Table 7.158. Links summary Name Type Summary domain Domain A link to the domain containing this group. permissions Permission[ ] A link to the permissions sub-collection for permissions attached to this group. roles Role[ ] A link to the roles sub-collection for roles attached to this group. tags Tag[ ] A link to the tags sub-collection for tags attached to this group. 7.116.1. roles A link to the roles sub-collection for roles attached to this group. Used only to represent the initial role assignments for a new group; thereafter, modification of role assignments is only supported via the roles sub-collection. 7.117. GuestOperatingSystem struct Represents an operating system installed on the virtual machine. To get that information send a request like this: The result will be like this: <vm href="/ovirt-engine/api/vms/123" id="123"> ... <guest_operating_system> <architecture>x86_64</architecture> <codename>Maipo</codename> <distribution>Red Hat Enterprise Linux Server</distribution> <family>Linux</family> <kernel> <version> <build>0</build> <full_version>3.10.0-514.10.2.el7.x86_64</full_version> <major>3</major> <minor>10</minor> <revision>514</revision> </version> </kernel> <version> <full_version>7.3</full_version> <major>7</major> <minor>3</minor> </version> </guest_operating_system> </vm> Table 7.159. Attributes summary Name Type Summary architecture String The architecture of the operating system, such as x86_64. 
codename String Code name of the operating system, such as Maipo . distribution String Full name of operating system distribution. family String Family of operating system, such as Linux . kernel Kernel Kernel version of the operating system. version Version Version of the installed operating system. 7.118. HardwareInformation struct Represents hardware information of host. To get that information send a request like this: The result will be like this: <host href="/ovirt-engine/api/hosts/123" id="123"> ... <hardware_information> <family>Red Hat Enterprise Linux</family> <manufacturer>Red Hat</manufacturer> <product_name>RHEV Hypervisor</product_name> <serial_number>01234567-89AB-CDEF-0123-456789ABCDEF</serial_number> <supported_rng_sources> <supported_rng_source>random</supported_rng_source> </supported_rng_sources> <uuid>12345678-9ABC-DEF0-1234-56789ABCDEF0</uuid> <version>1.2-34.5.el7ev</version> </hardware_information> ... </application> Table 7.160. Attributes summary Name Type Summary family String Type of host's CPU. manufacturer String Manufacturer of the host's machine and hardware vendor. product_name String Host's product name (for example RHEV Hypervisor ). serial_number String Unique ID for host's chassis. supported_rng_sources RngSource[ ] Supported sources of random number generator. uuid String Unique ID for each host. version String Unique name for each of the manufacturer. 7.119. HighAvailability struct Type representing high availability of a virtual machine. Table 7.161. Attributes summary Name Type Summary enabled Boolean Define if the virtual machine is considered highly available. priority Integer Indicates the priority of the virtual machine inside the run and migration queues. 7.119.1. enabled Define if the virtual machine is considered highly available. Configuring a VM lease is highly recommended (refer to that section) in order to prevent split-brain scenarios. Use a boot disk's storage-domain or any other active storage-domain. 7.119.2. priority Indicates the priority of the virtual machine inside the run and migration queues. Virtual machines with higher priorities will be started and migrated before virtual machines with lower priorities. The value is an integer between 0 and 100. The higher the value, the higher the priority. The graphical user interface (GUI) does not allow specifying all the possible values, instead it only allows you to select Low , Medium or High . When the value is set using the API, the GUI will set the label as follows: API Value GUI Label 0 - 25 Low 26 - 74 Medium 75 - 100 High When the label is selected using the GUI, the value in the API will be set as follows: GUI Label API Value Low 1 Medium 50 High 100 7.120. Hook struct Represents a hook. Table 7.162. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. event_name String Name of the event to execute the hook on. id String A unique identifier. md5 String Checksum of the hook. name String A human-readable name in plain text. Table 7.163. Links summary Name Type Summary host Host Reference to the host the hook belongs to. 7.121. HookContentType enum Represents content type of hook script. Table 7.164. Values summary Name Summary binary Binary content type of the hook. text Text content type of the hook. 7.122. HookStage enum Type represents a stage of volume event at which hook executes. Table 7.165. Values summary Name Summary post Stage after start of volume. 
pre Stage before start of volume. 7.123. HookStatus enum Type represents the status of a hook. Table 7.166. Values summary Name Summary disabled Hook is disabled. enabled Hook is enabled. missing Hook is missing. 7.124. Host struct Type representing a host. Table 7.167. Attributes summary Name Type Summary address String The host address (FQDN/IP). auto_numa_status AutoNumaStatus The host auto non uniform memory access (NUMA) status. certificate Certificate The host certificate. comment String Free text containing comments about this object. cpu Cpu The CPU type of this host. description String A human-readable description in plain text. device_passthrough HostDevicePassthrough Specifies whether host device passthrough is enabled on this host. display Display Optionally specify the display address of this host explicitly. external_status ExternalStatus The host external status. hardware_information HardwareInformation The host hardware information. hosted_engine HostedEngine The self-hosted engine status of this host. id String A unique identifier. iscsi IscsiDetails The host iSCSI details. kdump_status KdumpStatus The host KDUMP status. ksm Ksm Kernel SamePage Merging (KSM) reduces references to memory pages from multiple identical pages to a single page reference. libvirt_version Version The host libvirt version. max_scheduling_memory Integer The max scheduling memory on this host in bytes. memory Integer The amount of physical memory on this host in bytes. name String A human-readable name in plain text. network_operation_in_progress Boolean Specifies whether a network-related operation, such as 'setup networks', 'sync networks', or 'refresh capabilities', is currently being executed on this host. numa_supported Boolean Specifies whether non uniform memory access (NUMA) is supported on this host. os OperatingSystem The operating system on this host. override_iptables Boolean Specifies whether we should override firewall definitions. ovn_configured Boolean Indicates if the host has correctly configured OVN. port Integer The host port. power_management PowerManagement The host power management definitions. protocol HostProtocol The protocol that the engine uses to communicate with the host. reinstallation_required Boolean Specifies whether the host should be reinstalled. root_password String When creating a new host, a root password is required if the password authentication method is chosen, but this is not subsequently included in the representation. se_linux SeLinux The host SElinux status. spm Spm The host storage pool manager (SPM) status and definition. ssh Ssh The SSH definitions. status HostStatus The host status. status_detail String The host status details. summary VmSummary The virtual machine summary - how many are active, migrating and total. transparent_huge_pages TransparentHugePages Transparent huge page support expands the size of memory pages beyond the standard 4 KiB limit. type HostType Indicates if the host contains a full installation of the operating system or a scaled-down version intended only to host virtual machines. update_available Boolean Specifies whether there is an oVirt-related update on this host. version Version The version of VDSM. vgpu_placement VgpuPlacement Specifies the vGPU placement strategy. 7.124.1. external_status The host external status. This can be used by third-party software to change the host external status in case of an issue. 
This has no effect on the host lifecycle, unless a third-party software checks for this status and acts accordingly. 7.124.2. hosted_engine The self-hosted engine status of this host. Important When a host or collection of hosts is retrieved, this attribute is not included unless the all_content parameter of the operation is explicitly set to true . See the documentation of the operations that retrieve one or multiple hosts for details. 7.124.3. kdump_status The host KDUMP status. KDUMP happens when the host kernel has crashed and it is now going through memory dumping. 7.124.4. ksm Kernel SamePage Merging (KSM) reduces references to memory pages from multiple identical pages to a single page reference. This helps with optimization for memory density. For example, to enable KSM for host 123 , send a request like this: With a request body like this: <host> <ksm> <enabled>true</enabled> </ksm> </host> 7.124.5. libvirt_version The host libvirt version. For more information on libvirt please go to libvirt . 7.124.6. network_operation_in_progress Specifies whether a network-related operation, such as 'setup networks', 'sync networks', or 'refresh capabilities', is currently being executed on this host. Note The header All-Content:true must be added to the request in order for this attribute to be included in the response. 7.124.7. override_iptables Specifies whether we should override firewall definitions. This applies only when the host is installed or re-installed. 7.124.8. protocol The protocol that the engine uses to communicate with the host. Warning Since version 4.1 of the engine the protocol is always set to stomp since xml was removed. 7.124.9. se_linux The host SElinux status. Security-Enhanced Linux (SELinux) is a component in the Linux kernel that provides a mechanism for supporting access control security policies. 7.124.10. spm The host storage pool manager (SPM) status and definition. Use it to set the SPM priority of this host, and to see whether this is the current SPM or not. 7.124.11. status_detail The host status details. Relevant for Gluster hosts. 7.124.12. transparent_huge_pages Transparent huge page support expands the size of memory pages beyond the standard 4 KiB limit. This reduces memory consumption and increases host performance. For example, to enable transparent huge page support for host 123 , send a request like this: With a request body like this: <host> <transparent_hugepages> <enabled>true</enabled> </transparent_hugepages> </host> 7.124.13. version The version of VDSM. For example: This GET request will return the following output: <host> ... <version> <build>999</build> <full_version>vdsm-4.18.999-419.gitcf06367.el7</full_version> <major>4</major> <minor>18</minor> <revision>0</revision> </version> ... </host> Table 7.168. Links summary Name Type Summary affinity_labels AffinityLabel[ ] agents Agent[ ] cluster Cluster cpu_units HostCpuUnit[ ] List of all host's CPUs with detailed information about the topology (socket, core) and with information about the current CPU pinning. devices HostDevice[ ] external_host_provider ExternalHostProvider external_network_provider_configurations ExternalNetworkProviderConfiguration[ ] External network providers provisioned on the host. hooks Hook[ ] katello_errata KatelloErratum[ ] Lists all the Katello errata assigned to the host. 
network_attachments NetworkAttachment[ ] nics HostNic[ ] numa_nodes NumaNode[ ] permissions Permission[ ] statistics Statistic[ ] Each host resource exposes a statistics sub-collection for host-specific statistics. storage_connection_extensions StorageConnectionExtension[ ] storages HostStorage[ ] tags Tag[ ] unmanaged_networks UnmanagedNetwork[ ] 7.124.14. cpu_units List of all host's CPUs with detailed information about the topology (socket, core) and with information about the current CPU pinning. You will receive response in XML like this one: <host_cpu_units> <host_cpu_unit> <core_id>0</core_id> <cpu_id>0</cpu_id> <socket_id>0</socket_id> <vms> <vm href="/ovirt-engine/api/vms/def" id="def" /> </vms> </host_cpu_unit> <host_cpu_unit> <core_id>0</core_id> <cpu_id>1</cpu_id> <socket_id>1</socket_id> <runs_vdsm>true</runs_vdsm> </host_cpu_unit> <host_cpu_unit> <core_id>0</core_id> <cpu_id>2</cpu_id> <socket_id>2</socket_id> </host_cpu_unit> </host_cpu_units> 7.124.15. external_network_provider_configurations External network providers provisioned on the host. This attribute is read-only. Setting it will have no effect on the host. The value of this parameter reflects the Default Network Provider of the cluster. 7.124.16. katello_errata Lists all the Katello errata assigned to the host. You will receive response in XML like this one: <katello_errata> <katello_erratum href="/ovirt-engine/api/katelloerrata/456" id="456"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> ... </packages> </katello_erratum> ... </katello_errata> 7.124.17. statistics Each host resource exposes a statistics sub-collection for host-specific statistics. An example of an XML representation: <statistics> <statistic href="/ovirt-engine/api/hosts/123/statistics/456" id="456"> <name>memory.total</name> <description>Total memory</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>25165824000</datum> </value> </values> <host href="/ovirt-engine/api/hosts/123" id="123"/> </statistic> ... </statistics> Note This statistics sub-collection is read-only. The following list shows the statistic types for hosts: Name Description memory.total Total memory in bytes on the host. memory.used Memory in bytes used on the host. memory.free Memory in bytes free on the host. memory.shared Memory in bytes shared on the host. memory.buffers I/O buffers in bytes. memory.cached OS caches in bytes. swap.total Total swap memory in bytes on the host. swap.free Swap memory in bytes free on the host. swap.used Swap memory in bytes used on the host. swap.cached Swap memory in bytes also cached in host's memory. ksm.cpu.current Percentage of CPU usage for Kernel SamePage Merging. cpu.current.user Percentage of CPU usage for user slice. cpu.current.system Percentage of CPU usage for system. cpu.current.idle Percentage of idle CPU usage. cpu.load.avg.5m CPU load average per five minutes. boot.time Boot time of the machine. 7.125. HostCpuUnit struct Type representing a physical CPU of a host with the current pinning status. Table 7.169. Attributes summary Name Type Summary comment String Free text containing comments about this object. 
core_id Integer The id of the core the CPU belongs to. cpu_id Integer The id of the CPU. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. runs_vdsm Boolean A flag indicating that the CPU runs the VDSM socket_id Integer The id of the socket the CPU belongs to. Table 7.170. Links summary Name Type Summary vms Vm[ ] A list of VMs that has its virtual CPU pinned to this physical CPU. 7.126. HostDevice struct Table 7.171. Attributes summary Name Type Summary capability String comment String Free text containing comments about this object. description String A human-readable description in plain text. driver String The name of the driver this device is bound to. id String A unique identifier. iommu_group Integer m_dev_types MDevType[ ] List of all supported mdev types on the physical device, name String A human-readable name in plain text. physical_function HostDevice placeholder Boolean product Product vendor Vendor virtual_functions Integer 7.126.1. driver The name of the driver this device is bound to. For example: pcieport or uhci_hcd . Table 7.172. Links summary Name Type Summary host Host parent_device HostDevice vm Vm 7.127. HostDevicePassthrough struct Table 7.173. Attributes summary Name Type Summary enabled Boolean 7.128. HostNic struct Represents a host NIC. For example, the XML representation of a host NIC looks like this: <host_nic href="/ovirt-engine/api/hosts/123/nics/456" id="456"> <name>eth0</name> <boot_protocol>static</boot_protocol> <bridged>true</bridged> <custom_configuration>true</custom_configuration> <ip> <address>192.168.122.39</address> <gateway>192.168.122.1</gateway> <netmask>255.255.255.0</netmask> <version>v4</version> </ip> <ipv6> <gateway>::</gateway> <version>v6</version> </ipv6> <ipv6_boot_protocol>none</ipv6_boot_protocol> <mac> <address>52:54:00:0c:79:1d</address> </mac> <mtu>1500</mtu> <status>up</status> </host_nic> A bonded interface is represented as a HostNic object containing the bonding and slaves attributes. For example, the XML representation of a bonded host NIC looks like this: <host_nic href="/ovirt-engine/api/hosts/123/nics/456" id="456"> <name>bond0</name> <mac address="00:00:00:00:00:00"/> <ip> <address>192.168.122.39</address> <gateway>192.168.122.1</gateway> <netmask>255.255.255.0</netmask> <version>v4</version> </ip> <boot_protocol>dhcp</boot_protocol> <bonding> <options> <option> <name>mode</name> <value>4</value> <type>Dynamic link aggregation (802.3ad)</type> </option> <option> <name>miimon</name> <value>100</value> </option> </options> <slaves> <host_nic id="123"/> <host_nic id="456"/> </slaves> </bonding> <mtu>1500</mtu> <bridged>true</bridged> <custom_configuration>false</custom_configuration> </host_nic> Table 7.174. Attributes summary Name Type Summary ad_aggregator_id Integer The ad_aggregator_id property of a bond or bond slave, for bonds in mode 4. base_interface String The base interface of the NIC. bonding Bonding The bonding parameters of the NIC. boot_protocol BootProtocol The IPv4 boot protocol configuration of the NIC. bridged Boolean Defines the bridged network status. check_connectivity Boolean comment String Free text containing comments about this object. custom_configuration Boolean description String A human-readable description in plain text. id String A unique identifier. ip Ip The IPv4 address of the NIC. ipv6 Ip The IPv6 address of the NIC. ipv6_boot_protocol BootProtocol The IPv6 boot protocol configuration of the NIC. 
mac Mac The MAC address of the NIC. mtu Integer The maximum transmission unit for the interface. name String A human-readable name in plain text. override_configuration Boolean properties Property[ ] speed Integer status NicStatus virtual_functions_configuration HostNicVirtualFunctionsConfiguration Describes the virtual functions configuration of a physical function NIC. vlan Vlan 7.128.1. ad_aggregator_id The ad_aggregator_id property of a bond or bond slave, for bonds in mode 4. Bond mode 4 is the 802.3ad standard, also called dynamic link aggregation. (See Wikipedia and Presentation for more information). This is only valid for bonds in mode 4, or NICs which are part of a bond. It is not present for bonds in other modes, or NICs which are not part of a bond in mode 4. The ad_aggregator_id property indicates which of the bond slaves are active. The value of the ad_aggregator_id of an active slave is the same as the value of the ad_aggregator_id property of the bond. This parameter is read only. Setting it will have no effect on the bond/NIC. It is retrieved from the /sys/class/net/bondX/bonding/ad_aggregator file for a bond, and the /sys/class/net/ensX/bonding_slave/ad_aggregator_id file for a NIC. 7.128.2. bridged Defines the bridged network status. Set to true for a bridged network and false for a bridgeless network. Table 7.175. Links summary Name Type Summary host Host network Network A reference to the network to which the interface should be connected. network_labels NetworkLabel[ ] The labels that are applied to this NIC. physical_function HostNic A reference to the physical function NIC of a SR-IOV virtual function NIC. qos Qos A link to the quality-of-service configuration of the interface. statistics Statistic[ ] A link to the statistics of the NIC. 7.128.3. network A reference to the network to which the interface should be connected. A blank network ID is allowed. 7.128.4. statistics A link to the statistics of the NIC. The data types for HostNic statistical values: data.current.rx - The rate in bytes per second of data received. data.current.tx - The rate in bytes per second of data transmitted. data.current.rx.bps - The rate in bits per second of data received (since version 4.2). data.current.tx.bps - The rate in bits per second of data transmitted (since version 4.2). data.total.rx - Total received data. data.total.tx - Total transmitted data. errors.total.rx - Total errors from receiving data. errors.total.tx - Total errors from transmitting data. 7.129. HostNicVirtualFunctionsConfiguration struct Describes the virtual functions configuration of an SR-IOV-enabled physical function NIC. Table 7.176. Attributes summary Name Type Summary all_networks_allowed Boolean Defines whether all networks are allowed to be defined on the related virtual functions, or specified ones only. max_number_of_virtual_functions Integer The maximum number of virtual functions the NIC supports. number_of_virtual_functions Integer The number of virtual functions currently defined. 7.129.1. max_number_of_virtual_functions The maximum number of virtual functions the NIC supports. This property is read-only. 7.129.2. number_of_virtual_functions The number of virtual functions currently defined. A user-defined value between 0 and max_number_of_virtual_functions . 7.130. HostProtocol enum The protocol used by the engine to communicate with a host. Warning Since version 4.1 of the engine the protocol is always set to stomp since xml was removed. Table 7.177. 
Values summary Name Summary stomp JSON-RPC protocol on top of STOMP. xml XML-RPC protocol. 7.131. HostStatus enum Type representing a host status. Table 7.178. Values summary Name Summary connecting The engine cannot communicate with the host for a specific threshold of time, so it is now trying to connect before going through fencing. down The host is down. error The host is in error status. initializing The host is initializing. install_failed The host installation failed. installing The host is being installed. installing_os The host operating system is now installing. kdumping The host kernel has crashed and it is now going through memory dumping. maintenance The host is in maintenance status. non_operational The host is non operational. non_responsive The host is not responsive. pending_approval The host is pending administrator approval. preparing_for_maintenance The host is preparing for maintenance. reboot The host is being rebooted. unassigned The host is in the activation process. up The host is up. 7.131.1. error The host is in error status. This happens if attempts to run a virtual machine on this host fail several times. 7.131.2. initializing The host is initializing. This is an intermediate step before moving the host to 'up' status. 7.131.3. install_failed The host installation failed. In such cases, look at the event log to understand what caused the installation to fail, and issue a re-install. 7.131.4. installing_os The host operating system is now installing. This status is relevant when using a Satellite/Foreman provider, and issuing a bare-metal provisioning (discovered host provisioning). 7.131.5. maintenance The host is in maintenance status. When a host is in maintenance, it cannot run virtual machines. 7.131.6. non_operational The host is non operational. This can happen due to various reasons, such as not having a connection with the storage, not supporting a mandatory network, not supporting the cluster level, and more. 7.131.7. non_responsive The host is not responsive. This means that the engine is not able to communicate with the host. 7.131.8. pending_approval The host is pending administrator approval. This is relevant only for vintage ovirt-node / RHV-H. This property is no longer relevant since Vintage Node is no longer supported, and has been deprecated. 7.131.9. preparing_for_maintenance The host is preparing for maintenance. During this time the engine makes sure to live migrate all the virtual machines from this host to other hosts. Once all migrations have been completed, the host will move to 'maintenance' status. 7.132. HostStorage struct Table 7.179. Attributes summary Name Type Summary address String comment String Free text containing comments about this object. description String A human-readable description in plain text. driver_options Property[ ] The options to be passed when creating a storage domain using a cinder driver. driver_sensitive_options Property[ ] Parameters containing sensitive information, to be passed when creating a storage domain using a cinder driver. id String A unique identifier. logical_units LogicalUnit[ ] mount_options String name String A human-readable name in plain text. nfs_retrans Integer The number of times to retry a request before attempting further recovery actions. nfs_timeo Integer The time in tenths of a second to wait for a response before retrying NFS requests.
nfs_version NfsVersion override_luns Boolean password String path String port Integer portal String target String type StorageType username String vfs_type String volume_group VolumeGroup 7.132.1. driver_options The options to be passed when creating a storage domain using a cinder driver. For example (Kaminario backend): <storage_domain> <name>kamniraio-cinder</name> <type>managed_block_storage</type> <storage> <type>managed_block_storage</type> <driver_options> <property> <name>san_ip</name> <value>192.168.1.1</value> </property> <property> <name>san_login</name> <value>username</value> </property> <property> <name>san_password</name> <value>password</value> </property> <property> <name>use_multipath_for_image_xfer</name> <value>true</value> </property> <property> <name>volume_driver</name> <value>cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver</value> </property> </driver_options> </storage> <host> <name>host</name> </host> </storage_domain> 7.132.2. driver_sensitive_options Parameters containing sensitive information, to be passed when creating a storage domain using a cinder driver. These parameters are encrypted when they are saved. For example, the following XML encrypts and saves a username, password and SAN IP address: <storage_domain> <name>kamniraio-cinder</name> <type>managed_block_storage</type> <storage> <type>managed_block_storage</type> <driver_options> <property> <name>san_ip</name> <value>192.168.1.1</value> </property> <property> <name>san_login</name> <value>username</value> </property> <property> <name>san_password</name> <value>password</value> </property> <property> <name>use_multipath_for_image_xfer</name> <value>true</value> </property> <property> <name>volume_driver</name> <value>cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver</value> </property> </driver_options> <driver_sensitive_options> <property> <name>username</name> <value>admin</value> </property> <property> <name>password</name> <value>123</value> </property> <property> <name>san_ip</name> <value>192.168.1.1</value> </property> </driver_sensitive_options> </storage> <host> <name>host</name> </host> </storage_domain> 7.132.3. nfs_retrans The number of times to retry a request before attempting further recovery actions. The value must be in the range of 0 to 65535. For more details see the description of the retrans mount option in the nfs man page. 7.132.4. nfs_timeo The time in tenths of a second to wait for a response before retrying NFS requests. The value must be in the range of 0 to 65535. For more details see the description of the timeo mount option in the nfs man page. Table 7.180. Links summary Name Type Summary host Host 7.133. HostType enum This enumerated type is used to determine which type of operating system is used by the host. Table 7.181. Values summary Name Summary ovirt_node The host contains Red Hat Virtualization Host (RHVH): a new implementation of Red Hat Enterprise Virtualization Hypervisor (RHEV-H) which uses the same installer as Red Hat Enterprise Linux, CentOS, or Fedora. rhel The host contains a full Red Hat Enterprise Linux, CentOS, or Fedora installation. rhev_h The host contains Red Hat Enterprise Virtualization Hypervisor (RHEV-H), a small-scaled version of Red Hat Enterprise Linux, CentOS, or Fedora, used solely to host virtual machines. 7.133.1. 
ovirt_node The host contains Red Hat Virtualization Host (RHVH): a new implementation of Red Hat Enterprise Virtualization Hypervisor (RHEV-H) which uses the same installer as Red Hat Enterprise Linux, CentOS, or Fedora. The main difference between RHVH and legacy RHEV-H is that RHVH has a writeable file system and will handle its own installation instead of having RPMs pushed to it by the Manager like in legacy RHEV-H. 7.133.2. rhev_h The host contains Red Hat Enterprise Virtualization Hypervisor (RHEV-H), a small-scale version of Red Hat Enterprise Linux, CentOS, or Fedora, used solely to host virtual machines. This property is no longer relevant since Vintage Node is no longer supported, and has been deprecated. 7.134. HostedEngine struct Table 7.182. Attributes summary Name Type Summary active Boolean configured Boolean global_maintenance Boolean local_maintenance Boolean score Integer 7.135. Icon struct Icon of a virtual machine or template. Table 7.183. Attributes summary Name Type Summary comment String Free text containing comments about this object. data String Base64-encoded content of the icon file. description String A human-readable description in plain text. id String A unique identifier. media_type String Format of the icon file. name String A human-readable name in plain text. 7.135.1. media_type Format of the icon file. One of: image/jpeg image/png image/gif 7.136. Identified struct This interface is the base model for all types that represent objects with an identifier. Table 7.184. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. 7.137. Image struct Represents an image entity. Table 7.185. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. size Integer The size of the image file. type ImageFileType The type of the image file. Table 7.186. Links summary Name Type Summary storage_domain StorageDomain The storage domain associated with this image. 7.138. ImageFileType enum Represents the file type of an image. Table 7.187. Values summary Name Summary disk The image is a disk format that can be used as a virtual machine's disk. floppy The image is a floppy disk that can be attached to a virtual machine, for example to install the VirtIO drivers in Windows. iso The image is a .iso file. 7.138.1. iso The image is a .iso file that can be used as a CD-ROM to boot and install a virtual machine. 7.139. ImageTransfer struct This type contains information regarding an image transfer being performed. Table 7.188. Attributes summary Name Type Summary active Boolean Indicates whether there's at least one active session for this transfer, i.e. there's at least one live transfer session between the client and the daemon. comment String Free text containing comments about this object. description String A human-readable description in plain text. direction ImageTransferDirection The direction indicates whether the transfer is sending image data ( upload ) or receiving image data ( download ). format DiskFormat The format of the data sent during upload or received during download. id String A unique identifier.
inactivity_timeout Integer The timeout in seconds of client inactivity, after which the transfer is aborted by the Red Hat Virtualization Manager. name String A human-readable name in plain text. phase ImageTransferPhase The current phase of the image transfer in progress. proxy_url String The URL of the proxy server that the user inputs or outputs to. shallow Boolean Download only the specified image instead of the entire image chain. timeout_policy ImageTransferTimeoutPolicy The timeout policy determines how the system handles the transfer when a client is idle for more than inactivityTimeout. transfer_url String The URL of the daemon server that the user can input or output to directly. transferred Integer Indicates the number of transferred bytes. 7.139.1. direction The direction indicates whether the transfer is sending image data ( upload ) or receiving image data ( download ). If a direction is not set when adding a new transfer, the default direction for the transfer will be upload . 7.139.2. format The format of the data sent during upload or received during download. If not specified, it defaults to the disk's format. 7.139.3. inactivity_timeout The timeout in seconds of client inactivity, after which the transfer is aborted by the Red Hat Virtualization Manager. To disable the inactivity timeout, specify '0'. If not specified, the value defaults to the engine-config value TransferImageClientInactivityTimeoutInSeconds. 7.139.4. phase The current phase of the image transfer in progress. Each transfer needs a managed session, which must be opened for the user to input or output an image. Please refer to image transfer for further documentation. 7.139.5. proxy_url The URL of the proxy server that the user inputs or outputs to. This attribute is available only if the image transfer is in the transferring phase. See phase for details. 7.139.6. shallow Download only the specified image instead of the entire image chain. If true, when using format="raw" and direction="download", the transfer includes data only from the specified disk snapshot, and unallocated areas are reported as holes. By default, the transfer includes data from all disk snapshots. When specifying a disk snapshot, the transfer includes only data for the specified disk snapshot. When specifying a disk, the transfer includes only data from the active disk snapshot. This parameter has no effect when not using format="raw" or for direction="upload". Example: Downloading a single snapshot: <image_transfer> <snapshot id="2fb24fa2-a5db-446b-b733-4654661cd56d"/> <direction>download</direction> <format>raw</format> <shallow>true</shallow> </image_transfer> To download the active snapshot disk image (which is not accessible as a disk snapshot), specify the disk: <image_transfer> <disk id="ff6be46d-ef5d-41d6-835c-4a68e8956b00"/> <direction>download</direction> <format>raw</format> <shallow>true</shallow> </image_transfer> In both cases you can now download a qcow2 image using the imageio client: from ovirt_imageio import client client.download( transfer.transfer_url, "51275e7d-42e9-491f-9d65-b9211c897eac", backing_file="07c0ccac-0845-4665-9097-d0a3b16cf43b", backing_format="qcow2") 7.139.7. transfer_url The URL of the daemon server that the user can input or output to directly. This is an alternative to the proxy_url . That is, if the client has access to the host machine, it can bypass the proxy and transfer directly to the host, potentially improving throughput.
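For illustration only, a minimal sketch of a direct transfer through transfer_url with the ovirt_imageio client shown above might look like the following; the transfer object, the output file name, and the CA file are assumptions, and the exact arguments may differ between ovirt_imageio versions:

from ovirt_imageio import client

# 'transfer' is assumed to be an ImageTransfer that is already in the
# "transferring" phase; transfer_url points directly at the daemon on the
# host, bypassing proxy_url.
client.download(
    transfer.transfer_url,
    "disk.qcow2",   # hypothetical local output file
    "ca.pem",       # hypothetical CA certificate, if TLS verification is required
)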
This attribute is available only if the image transfer is in the transferring phase. See phase for details. Table 7.189. Links summary Name Type Summary backup Backup The backup associated with the image transfer. disk Disk The disk which is targeted for input or output. host Host The host which will be used to write to the image which is targeted for input or output. image Image The image which is targeted for input or output. snapshot DiskSnapshot The disk snapshot which is targeted for input or output. 7.139.8. backup The backup associated with the image transfer. Specify when initiating an image transfer for a disk that is part of a backup. 7.139.9. host The host which will be used to write to the image which is targeted for input or output. If not specified, an active host will be randomly selected from the data center. 7.139.10. image The image which is targeted for input or output. Important This attribute is deprecated since version 4.2 of the engine. Use the disk or snapshot attributes instead. 7.140. ImageTransferDirection enum The image transfer direction for a transfer. When adding a new transfer, the user can choose whether the transfer will be to an image, choosing upload , or to transfer from an image- choosing download as an ImageTransferDirection. Please refer to image transfer for further documentation. Table 7.190. Values summary Name Summary download The user must choose download when he/she wants to stream data from an image. upload The user can choose upload when he/she wants to stream data to an image. 7.141. ImageTransferPhase enum A list of possible phases for an image transfer entity. Each of these values defines a specific point in a transfer flow. Please refer to image transfer for more information. Table 7.191. Values summary Name Summary cancelled This phase will be set as a result of the user cancelling the transfer. cancelled_system This phase will be set as a result of the system cancelling the transfer. cancelled_user This phase will be set as a result of the user cancelling the transfer. finalizing_cleanup This phase indicates that the user cancelled the transfer, and necessary cleanup is being done. finalizing_failure This phase can only be set in the Administration Portal, and indicates that there was an error during the transfer, and it is being finalized with a failure. finalizing_success This phase will be set when the user calls finalize . finished_cleanup This phase indicates that the user cancelled the transfer, and necessary cleanup is done. finished_failure Indicates that the targeted image failed the verification, and cannot be used. finished_success Indicates that the transfer session was successfully closed, and the targeted image was verified and ready to be used. initializing The initial phase of an image transfer. paused_system This phase means the session timed out, or some other error occurred with this transfer; for example ovirt-imageio is not running in the selected host. paused_user This phase is a result of a pause call by the user, using pause . resuming The phase where the transfer has been resumed by the client calling resume . transferring The phase where the transfer session is open, and the client can input or output the desired image using the preferred tools. unknown An unknown phase. 7.141.1. cancelled This phase will be set as a result of the user cancelling the transfer. The cancellation can only be performed in the Administration Portal. 7.141.2. finalizing_success This phase will be set when the user calls finalize . 
Calling finalize is essential to finish the transfer session, and finish using the targeted image. After finalizing, the phase will be changed to finished_success or finished_failure . Refer to image transfer for more information. 7.141.3. finished_failure Indicates that the targeted image failed the verification, and cannot be used. After reaching this phase, the image transfer entity will be deleted, and the targeted image will be set to illegal. The system cancelling the transfer will also result in this phase. 7.141.4. finished_success Indicates that the transfer session was successfully closed, and the targeted image was verified and ready to be used. After reaching this phase, the image transfer entity will be deleted. 7.141.5. initializing The initial phase of an image transfer. It is set while the transfer session is being established. Once the session is established, the phase will be changed to transferring . 7.141.6. paused_system This phase means the session timed out, or some other error occurred with this transfer; for example, ovirt-imageio is not running on the selected host. To resume the session, the client should call resume . After resuming, the phase will change to resuming . 7.141.7. resuming The phase where the transfer has been resumed by the client calling resume . Resuming starts a new session, and after calling it, the phase will be changed to transferring , or paused_system in case of a failure. 7.141.8. unknown An unknown phase. This will only be set in cases of unpredictable errors. 7.142. ImageTransferTimeoutPolicy enum The image transfer timeout policy. Defines how the system handles a transfer when the client is inactive for inactivityTimeout seconds. Please refer to image transfer for further documentation. Table 7.192. Values summary Name Summary cancel Cancel the transfer and unlock the disk. legacy The LEGACY policy preserves the legacy functionality, which is the default. pause Pause the transfer. 7.142.1. cancel Cancel the transfer and unlock the disk. For an image transfer using the upload direction, the disk is deleted. 7.142.2. legacy The LEGACY policy preserves the legacy functionality, which is the default. The default behaviour is to cancel the transfer if the direction is download, and pause it if it is upload. 7.142.3. pause Pause the transfer. The transfer can be resumed or canceled by the user. The disk will remain locked while the transfer is paused. 7.143. InheritableBoolean enum Enum representing a boolean value that can be either set, or inherited from a higher level. The inheritance order is virtual machine → cluster → engine-config. Table 7.193. Values summary Name Summary false Set the value to false on this level. inherit Inherit the value from a higher level. true Set the value to true on this level. 7.144. Initialization struct Table 7.194. Attributes summary Name Type Summary active_directory_ou String authorized_ssh_keys String cloud_init CloudInit Deprecated attribute to specify cloud-init configuration. cloud_init_network_protocol CloudInitNetworkProtocol Attribute specifying the cloud-init protocol to use for formatting the cloud-init network parameters. configuration Configuration custom_script String dns_search String dns_servers String domain String host_name String input_locale String nic_configurations NicConfiguration[ ] org_name String regenerate_ids Boolean regenerate_ssh_keys Boolean root_password String system_locale String timezone String ui_language String user_locale String user_name String windows_license_key String 7.144.1.
cloud_init Deprecated attribute to specify cloud-init configuration. This attribute and the CloudInit type have been deprecated and will be removed in the future. To specify the cloud-init configuration, use the attributes inside the Initialization type. The mapping between the attributes of these two types is as follows: CloudInit Initialization authorized_keys authorized_ssh_keys dns.search_domains dns_search dns.servers dns_servers files custom_script host host_name network_configuration.nics nic_configurations regenerate_ssh_keys regenerate_ssh_keys timezone timezone users user_name & root_password For more details on how to use cloud-init, see the examples in Python and Java . 7.144.2. cloud_init_network_protocol Attribute specifying the cloud-init protocol to use for formatting the cloud-init network parameters. If omitted, a default value is used, as described in the CloudInitNetworkProtocol . 7.145. InstanceType struct Describes the hardware configuration of virtual machines. For example, the medium instance type includes 1 virtual CPU and 4 GiB of memory. It is a top-level entity (i.e. not bound to any data center or cluster). The attributes that are used for instance types and are common to virtual machine and template types are: console cpu custom_cpu_model custom_emulated_machine display high_availability io memory memory_policy migration migration_downtime os rng_device soundcard_enabled usb virtio_scsi When creating a virtual machine from both an instance type and a template, the virtual machine will inherit the hardware configurations from the instance type. Note An instance type inherits its attributes from the template entity, although most template attributes are not used in instance types. Table 7.195. Attributes summary Name Type Summary auto_pinning_policy AutoPinningPolicy Specifies if and how the auto CPU and NUMA configuration is applied. bios Bios Reference to the virtual machine's BIOS configuration. comment String Free text containing comments about this object. console Console Console configured for this virtual machine. cpu Cpu The configuration of the virtual machine CPU. cpu_pinning_policy CpuPinningPolicy Specifies if and how the CPU and NUMA configuration is applied. cpu_shares Integer creation_time Date The virtual machine creation date. custom_compatibility_version Version Virtual machine custom compatibility version. custom_cpu_model String custom_emulated_machine String custom_properties CustomProperty[ ] Properties sent to VDSM to configure various hooks. delete_protected Boolean If true , the virtual machine cannot be deleted. description String A human-readable description in plain text. display Display The virtual machine display configuration. domain Domain Domain configured for this virtual machine. high_availability HighAvailability The virtual machine high availability configuration. id String A unique identifier. initialization Initialization Reference to the virtual machine's initialization configuration. io Io For performance tuning of IO threading. large_icon Icon Virtual machine's large icon. lease StorageDomainLease Reference to the storage domain this virtual machine/template lease resides on. memory Integer The virtual machine's memory, in bytes. memory_policy MemoryPolicy Reference to the virtual machine's memory management configuration. migration MigrationOptions Reference to the configuration of migration of a running virtual machine to another host.
migration_downtime Integer Maximum time, in milliseconds, the virtual machine can be non-responsive during its live migration to another host. multi_queues_enabled Boolean If true , each virtual interface will get the optimal number of queues, depending on the available virtual CPUs. name String A human-readable name in plain text. origin String The origin of this virtual machine. os OperatingSystem Operating system type installed on the virtual machine. placement_policy VmPlacementPolicy The configuration of the virtual machine's placement policy. rng_device RngDevice Random Number Generator device configuration for this virtual machine. serial_number SerialNumber Virtual machine's serial number in a cluster. small_icon Icon Virtual machine's small icon. soundcard_enabled Boolean If true , the sound card is added to the virtual machine. sso Sso Reference to the Single Sign On configuration this virtual machine is configured for. start_paused Boolean If true , the virtual machine will initially be in 'paused' state after start. stateless Boolean If true , the virtual machine is stateless - its state (disks) is rolled back after shutdown. status TemplateStatus The status of the template. storage_error_resume_behaviour VmStorageErrorResumeBehaviour Determines how the virtual machine will be resumed after a storage error. time_zone TimeZone The virtual machine's time zone set by oVirt. tpm_enabled Boolean If true , a TPM device is added to the virtual machine. tunnel_migration Boolean If true , the network data transfer will be encrypted during virtual machine live migration. type VmType Determines whether the virtual machine is optimized for desktop or server. usb Usb Configuration of USB devices for this virtual machine (count, type). version TemplateVersion Indicates whether this is the base version or a sub-version of another template. virtio_scsi VirtioScsi Reference to the VirtIO SCSI configuration. virtio_scsi_multi_queues Integer Number of queues for a Virtio-SCSI controller. This field requires virtioScsiMultiQueuesEnabled to be true ; see virtioScsiMultiQueuesEnabled for more information. virtio_scsi_multi_queues_enabled Boolean If true , the Virtio-SCSI devices will obtain multiple queues depending on the available virtual CPUs and disks, or according to the specified virtioScsiMultiQueues. vm Vm The virtual machine configuration associated with this template. 7.145.1. auto_pinning_policy Specifies if and how the auto CPU and NUMA configuration is applied. Important Since version 4.5 of the engine this operation is deprecated, and preserved only for backwards compatibility. It might be removed in the future. Please use CpuPinningPolicy instead. 7.145.2. cpu The configuration of the virtual machine CPU. The socket configuration can be updated without rebooting the virtual machine. The cores and the threads require a reboot. For example, to change the number of sockets to 4 immediately, and the number of cores and threads to 2 after reboot, send the following request: With a request body: <vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm> 7.145.3. cpu_pinning_policy Specifies if and how the CPU and NUMA configuration is applied. When not specified, the CPU pinning string determines whether the CpuPinningPolicy is None or Manual. 7.145.4. custom_compatibility_version Virtual machine custom compatibility version. Enables a virtual machine to be customized to its own compatibility version.
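For illustration only, a minimal Python SDK sketch of setting this attribute on a virtual machine might look like the following; the engine URL, credentials, and VM id 123 are placeholders, and the ovirtsdk4 package is assumed to be available:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details; adjust for your environment.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Update the hypothetical VM 123 to use compatibility version 4.2.
vm_service = connection.system_service().vms_service().vm_service('123')
vm_service.update(
    types.Vm(
        custom_compatibility_version=types.Version(major=4, minor=2),
    ),
)

connection.close()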
If custom_compatibility_version is set, it overrides the cluster's compatibility version for this particular virtual machine. The compatibility version of a virtual machine is limited by the data center the virtual machine resides in, and is checked against the capabilities of the host the virtual machine is planned to run on. 7.145.5. high_availability The virtual machine high availability configuration. If set, the virtual machine will be automatically restarted when it unexpectedly goes down. 7.145.6. initialization Reference to the virtual machine's initialization configuration. Note Since Red Hat Virtualization 4.1.8 this property can be cleared by sending an empty tag. For example, to clear the initialization attribute, send a request like this: With a request body like this: <vm> <initialization/> </vm> The response to such a request, and requests with the header All-Content: true , will still contain this attribute. 7.145.7. large_icon Virtual machine's large icon. Either set by the user or refers to an image chosen according to the operating system. 7.145.8. lease Reference to the storage domain this virtual machine/template lease resides on. A virtual machine running with a lease requires checking while running that the lease is not taken by another host, preventing another instance of this virtual machine from running on another host. This provides protection against split-brain in highly available virtual machines. A template can also have a storage domain defined for a lease in order to have the virtual machines created from this template preconfigured with this storage domain as the location of their leases. 7.145.9. memory The virtual machine's memory, in bytes. For example, to update a virtual machine to contain 1 gibibyte (GiB) of memory, send the following request: With the following request body: <vm> <memory>1073741824</memory> </vm> Memory hot plug is supported from Red Hat Virtualization 3.6 onwards. You can use the example above to increase memory while the virtual machine is in state up . The size increment must be divisible by the value of the HotPlugMemoryBlockSizeMb configuration value (256 MiB by default). If the memory size increment is not divisible by this value, the memory size change is only stored to the run configuration. Each successful memory hot plug operation creates one or two new memory devices. Memory hot unplug is supported from Red Hat Virtualization 4.2 onwards. Memory hot unplug can only be performed when the virtual machine is in state up . Only previously hot plugged memory devices can be removed by the hot unplug operation. The requested memory decrement is rounded down to match sizes of a combination of previously hot plugged memory devices. The requested memory value is stored to the run configuration without rounding. Note Memory in the example is converted to bytes using the following formula: 1 GiB = 2^30 bytes = 1073741824 bytes. Note Red Hat Virtualization Manager internally rounds values down to whole MiBs (1 MiB = 2^20 bytes). 7.145.10. migration Reference to the configuration of migration of a running virtual machine to another host. Note The API for querying a migration policy by the ID returned by this method is not implemented yet. Use /ovirt-engine/api/options/MigrationPolicies to get a list of all migration policies with their IDs. 7.145.11. migration_downtime Maximum time, in milliseconds, the virtual machine can be non-responsive during its live migration to another host.
Set either explicitly for the virtual machine or by engine-config -s DefaultMaximumMigrationDowntime=[value] 7.145.12. origin The origin of this virtual machine. Possible values: ovirt rhev vmware xen external hosted_engine managed_hosted_engine kvm physical_machine hyperv 7.145.13. placement_policy The configuration of the virtual machine's placement policy. This configuration can be updated to pin a virtual machine to one or more hosts. Note Virtual machines that are pinned to multiple hosts cannot be live migrated, but in the event of a host failure, any virtual machine configured to be highly available is automatically restarted on one of the other hosts to which the virtual machine is pinned. For example, to pin a virtual machine to two hosts, send the following request: With a request body like this: <vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm> 7.145.14. small_icon Virtual machine's small icon. Either set by user or refers to image set according to operating system. 7.145.15. sso Reference to the Single Sign On configuration this virtual machine is configured for. The user can be automatically signed in the virtual machine's operating system when console is opened. 7.145.16. tpm_enabled If true , a TPM device is added to the virtual machine. By default the value is false . This property is only visible when fetching if "All-Content=true" header is set. Table 7.196. Links summary Name Type Summary cdroms Cdrom[ ] Reference to the CD-ROM devices attached to the template. cluster Cluster Reference to cluster the virtual machine belongs to. cpu_profile CpuProfile Reference to CPU profile used by this virtual machine. disk_attachments DiskAttachment[ ] Reference to the disks attached to the template. graphics_consoles GraphicsConsole[ ] Reference to the graphic consoles attached to the template. mediated_devices VmMediatedDevice[ ] Mediated devices configuration. nics Nic[ ] Reference to the network interfaces attached to the template. permissions Permission[ ] Reference to the user permissions attached to the template. quota Quota Reference to quota configuration set for this virtual machine. storage_domain StorageDomain Reference to storage domain the virtual machine belongs to. tags Tag[ ] Reference to the tags attached to the template. watchdogs Watchdog[ ] Reference to the watchdog devices attached to the template. 7.146. Io struct Table 7.197. Attributes summary Name Type Summary threads Integer 7.147. Ip struct Represents the IP configuration of a network interface. Table 7.198. Attributes summary Name Type Summary address String The text representation of the IP address. gateway String The address of the default gateway. netmask String The network mask. version IpVersion The version of the IP protocol. 7.147.1. address The text representation of the IP address. For example, an IPv4 address will be represented as follows: <ip> <address>192.168.0.1</address> ... </ip> An IPv6 address will be represented as follows: <ip> <address>2620:52:0:20f0:4216:7eff:feaa:1b50</address> ... </ip> 7.147.2. netmask The network mask. For IPv6 addresses the value is an integer in the range of 0-128, which represents the subnet prefix. 7.147.3. version The version of the IP protocol. 
Note From version 4.1 of the Manager, this attribute will be optional, and when a value is not provided, it will be inferred from the value of the address attribute. 7.148. IpAddressAssignment struct Represents an IP address assignment for a network device. For a static boot protocol assignment, a subnet mask and IP address (and optionally a default gateway) must be provided in the IP configuration. Table 7.199. Attributes summary Name Type Summary assignment_method BootProtocol Sets the boot protocol used to assign the IP configuration for a network device. ip Ip Sets the IP configuration for a network device. 7.149. IpVersion enum Defines the values for the IP protocol version. Table 7.200. Values summary Name Summary v4 IPv4. v6 IPv6. 7.150. IscsiBond struct Table 7.201. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.202. Links summary Name Type Summary data_center DataCenter networks Network[ ] storage_connections StorageConnection[ ] 7.151. IscsiDetails struct Table 7.203. Attributes summary Name Type Summary address String disk_id String initiator String lun_mapping Integer password String paths Integer port Integer portal String product_id String serial String size Integer status String storage_domain_id String target String username String vendor_id String volume_group_id String 7.152. Job struct Represents a job, which monitors execution of a flow in the system. A job can contain multiple steps in a hierarchical structure. The steps can be processed in parallel, depending on the implementation of the flow. Table 7.204. Attributes summary Name Type Summary auto_cleared Boolean Indicates if the job should be cleared automatically after it was completed by the system. comment String Free text containing comments about this object. description String A human-readable description in plain text. end_time Date The end time of the job. external Boolean Indicates if the job originated from an external system. id String A unique identifier. last_updated Date The last update date of the job. name String A human-readable name in plain text. start_time Date The start time of the job. status JobStatus The status of the job. 7.152.1. external Indicates if the job originated from an external system. External jobs are managed externally, by the creator of the job. Table 7.205. Links summary Name Type Summary owner User The user who is the owner of the job. steps Step[ ] The steps of the job. 7.153. JobStatus enum Represents the status of the job. Table 7.206. Values summary Name Summary aborted The aborted job status. failed The failed job status. finished The finished job status. started The started job status. unknown The unknown job status. 7.153.1. aborted The aborted job status. This status is applicable for an external job that was forcibly aborted. 7.153.2. finished The finished job status. This status describes a completed job execution. 7.153.3. started The started job status. This status represents a job which is currently being executed. 7.153.4. unknown The unknown job status. This status represents jobs whose resolution is not known, i.e. jobs that were executed before the system was unexpectedly restarted. 7.154. KatelloErratum struct Type representing a Katello erratum. Table 7.207. Attributes summary Name Type Summary comment String Free text containing comments about this object.
description String A human-readable description in plain text. id String A unique identifier. issued Date The date when the Katello erratum was issued. name String A human-readable name in plain text. packages Package[ ] The list of packages which solve the issue reported by the Katello erratum. severity String The severity of the Katello erratum. solution String The solution for the issue described by the Katello erratum. summary String The summary of the Katello erratum. title String The title of the Katello erratum. type String The type of the Katello erratum. 7.154.1. severity The severity of the Katello erratum. The supported severities are moderate , important or critical . 7.154.2. type The type of the Katello erratum. The supported types are bugfix , enhancement or security . Table 7.208. Links summary Name Type Summary host Host Reference to the host that the Katello erratum is assigned to. vm Vm Reference to the virtual machine that the Katello erratum is assigned to. 7.155. KdumpStatus enum Table 7.209. Values summary Name Summary disabled enabled unknown 7.156. Kernel struct Table 7.210. Attributes summary Name Type Summary version Version 7.157. Ksm struct Table 7.211. Attributes summary Name Type Summary enabled Boolean merge_across_nodes Boolean 7.158. LinkLayerDiscoveryProtocolElement struct Represents an information element received by Link Layer Discovery Protocol (LLDP). IEEE 802.1AB defines type, length, value (TLV) as a "short, variable length encoding of an information element". This type represents such an information element. The attribute name is a human-readable string used to describe what the value is about, and may not be unique. The name is redundant, because it could be created from type and the optional oui and subtype . The purpose of name is to simplify the reading of the information element. The name of a property is exactly the same string which is used in IEEE 802.1AB chapter 8. Organizationally-specific information elements have the type of 127 and the attributes oui and subtype . For example, the XML representation of an information element may look like this: <link_layer_discovery_protocol_element> <name>Port VLAN Id</name> <oui>32962</oui> <properties> <property> <name>vlan id</name> <value>488</value> </property> <property> <name>vlan name</name> <value>v2-0488-03-0505</value> </property> </properties> <subtype>3</subtype> <type>127</type> </link_layer_discovery_protocol_element> Table 7.212. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. oui Integer The organizationally-unique identifier (OUI) encoded as an integer. properties Property[ ] Represents structured data transported by the information element as a list of name/value pairs. subtype Integer The organizationally-defined subtype encoded as an integer. type Integer The type of the LinkLayerDiscoveryProtocolElement encoded as an integer. 7.158.1. oui The organizationally-unique identifier (OUI) encoded as an integer. Only available if type is 127 . 7.158.2. subtype The organizationally-defined subtype encoded as an integer. Only available if type is 127 . 7.159. LogMaxMemoryUsedThresholdType enum Describes all maximum memory threshold types supported by the system. Table 7.213. Values summary Name Summary absolute_value_in_mb Absolute value threshold type. percentage Percentage threshold type. 
7.159.1. absolute_value_in_mb Absolute value threshold type. When an absolute value is specified, an audit log event is logged if the free memory in MB falls below the value specified in LogMaxMemoryUsedThreshold . 7.159.2. percentage Percentage threshold type. When a percentage is specified, an audit log event is logged if the memory used is above the value specified in LogMaxMemoryUsedThreshold . 7.160. LogSeverity enum Enum representing a severity of an event. Table 7.214. Values summary Name Summary alert Alert severity. error Error severity. normal Normal severity. warning Warning severity. 7.160.1. alert Alert severity. Used to specify a condition that requires immediate attention. 7.160.2. error Error severity. Used to specify that there is an error that needs to be examined. 7.160.3. normal Normal severity. Used for information events. 7.160.4. warning Warning severity. Used to warn that something might be wrong. 7.161. LogicalUnit struct Table 7.215. Attributes summary Name Type Summary address String discard_max_size Integer The maximum number of bytes that can be discarded by the logical unit's underlying storage in a single operation. discard_zeroes_data Boolean True, if previously discarded blocks in the logical unit's underlying storage are read back as zeros. disk_id String id String lun_mapping Integer password String paths Integer port Integer portal String product_id String serial String size Integer status LunStatus storage_domain_id String target String username String vendor_id String volume_group_id String 7.161.1. discard_max_size The maximum number of bytes that can be discarded by the logical unit's underlying storage in a single operation. A value of 0 means that the device does not support discard functionality. Note This is the software limit, and not the hardware limit, as noted in the queue-sysfs documentation for discard_max_bytes . 7.161.2. discard_zeroes_data True, if previously discarded blocks in the logical unit's underlying storage are read back as zeros. For more information, please see the queue-sysfs documentation for discard_zeroes_data . Important Since version 4.2.1 of the system, the support for this attribute has been removed as the sysfs file, discard_zeroes_data , was deprecated in the kernel. It is preserved for backwards compatibility, but the value will always be false . 7.162. LunStatus enum Table 7.216. Values summary Name Summary free unusable used 7.163. MDevType struct A mediated device is a software device that allows a physical device's resources to be divided. See Libvirt-MDEV for further details. Table 7.217. Attributes summary Name Type Summary available_instances Integer MDev type available instances count. description String MDev type description. human_readable_name String MDev type human readable name. name String MDev type name. 7.164. Mac struct Represents a MAC address of a virtual network interface. Table 7.218. Attributes summary Name Type Summary address String MAC address. 7.165. MacPool struct Represents a MAC address pool. Example of an XML representation of a MAC address pool: <mac_pool href="/ovirt-engine/api/macpools/123" id="123"> <name>Default</name> <description>Default MAC pool</description> <allow_duplicates>false</allow_duplicates> <default_pool>true</default_pool> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:E6</to> </range> </ranges> </mac_pool> Table 7.219. Attributes summary Name Type Summary allow_duplicates Boolean Defines whether duplicate MAC addresses are permitted in the pool.
comment String Free text containing comments about this object. default_pool Boolean Defines whether this is the default pool. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. ranges Range[ ] Defines the range of MAC addresses for the pool. 7.165.1. allow_duplicates Defines whether duplicate MAC addresses are permitted in the pool. If not specified, defaults to false . 7.165.2. default_pool Defines whether this is the default pool. If not specified, defaults to false . 7.165.3. ranges Defines the range of MAC addresses for the pool. Multiple ranges can be defined. Table 7.220. Links summary Name Type Summary permissions Permission[ ] Returns a reference to the permissions that are associated with the MacPool. 7.166. MemoryOverCommit struct Table 7.221. Attributes summary Name Type Summary percent Integer 7.167. MemoryPolicy struct Logical grouping of memory-related properties of virtual machine-like entities. Table 7.222. Attributes summary Name Type Summary ballooning Boolean guaranteed Integer The amount of memory, in bytes, that is guaranteed to not be drained by the balloon mechanism. max Integer Maximum virtual machine memory, in bytes. over_commit MemoryOverCommit transparent_huge_pages TransparentHugePages 7.167.1. guaranteed The amount of memory, in bytes, that is guaranteed to not be drained by the balloon mechanism. The Red Hat Virtualization Manager internally rounds this value down to whole MiB (1 MiB = 2^20 bytes). Note It can be updated while the virtual machine is running since Red Hat Virtualization 4.2 onwards, provided memory is updated in the same request as well, and the virtual machine is in state up . 7.167.2. max Maximum virtual machine memory, in bytes. The user provides the value in bytes, and the Red Hat Virtualization Manager rounds the value down to the nearest lower MiB value. For example, if the user enters a value of 1073741825 (1 GiB + 1 byte), then the Red Hat Virtualization Manager will truncate that value to the nearest lower MiB boundary: in this case 1073741824 (1 GiB). 7.168. MessageBrokerType enum Deprecated Message Broker type. Ignored, because the deployment of OpenStack Neutron agent is dropped since Red Hat Virtualization 4.4.0. Table 7.223. Values summary Name Summary qpid rabbit_mq 7.169. Method struct Table 7.224. Attributes summary Name Type Summary id SsoMethod 7.170. MigrateOnError enum Table 7.225. Values summary Name Summary do_not_migrate migrate migrate_highly_available 7.171. MigrationBandwidth struct Defines the bandwidth used by migration. Table 7.226. Attributes summary Name Type Summary assignment_method MigrationBandwidthAssignmentMethod The method used to assign the bandwidth. custom_value Integer Custom bandwidth in Mbps. 7.171.1. custom_value Custom bandwidth in Mbps. Will be applied only if the assignmentMethod attribute is custom . 7.172. MigrationBandwidthAssignmentMethod enum Defines how the migration bandwidth is assigned. Table 7.227. Values summary Name Summary auto Takes the bandwidth from the Quality of Service if the Quality of Service is defined. custom Custom defined bandwidth in Mbit/s. hypervisor_default Takes the value as configured on the hypervisor. 7.172.1. auto Takes the bandwidth from the Quality of Service if the Quality of Service is defined. If the Quality of Service is not defined, the bandwidth is taken from the detected link speed being used.
If nothing is detected, bandwidth falls back to the hypervisor_default value. 7.173. MigrationOptions struct The type for migration options. Table 7.228. Attributes summary Name Type Summary auto_converge InheritableBoolean bandwidth MigrationBandwidth The bandwidth that is allowed to be used by the migration. compressed InheritableBoolean custom_parallel_migrations Integer Specifies how many parallel migration connections to use. encrypted InheritableBoolean Specifies whether the migration should be encrypted or not. parallel_migrations_policy ParallelMigrationsPolicy Specifies whether and how to use parallel migration connections. 7.173.1. custom_parallel_migrations Specifies how many parallel migration connections to use. May be specified only when ParallelMigrationsPolicy is CUSTOM. The valid range of values is 2-255. The recommended range of values is 2-16. Table 7.229. Links summary Name Type Summary policy MigrationPolicy A reference to the migration policy, as defined using engine-config . 7.174. MigrationPolicy struct A policy describing how the migration is treated, such as convergence or how many parallel migrations are allowed. Table 7.230. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. 7.175. Network struct The type for a logical network. An example of the JSON representation of a logical network: { "network" : [ { "data_center" : { "href" : "/ovirt-engine/api/datacenters/123", "id" : "123" }, "stp" : "false", "mtu" : "0", "usages" : { "usage" : [ "vm" ] }, "name" : "ovirtmgmt", "description" : "Management Network", "href" : "/ovirt-engine/api/networks/456", "id" : "456", "link" : [ { "href" : "/ovirt-engine/api/networks/456/permissions", "rel" : "permissions" }, { "href" : "/ovirt-engine/api/networks/456/vnicprofiles", "rel" : "vnicprofiles" }, { "href" : "/ovirt-engine/api/networks/456/labels", "rel" : "labels" } ] } ] } An example of the XML representation of the same logical network: <network href="/ovirt-engine/api/networks/456" id="456"> <name>ovirtmgmt</name> <description>Management Network</description> <link href="/ovirt-engine/api/networks/456/permissions" rel="permissions"/> <link href="/ovirt-engine/api/networks/456/vnicprofiles" rel="vnicprofiles"/> <link href="/ovirt-engine/api/networks/456/labels" rel="labels"/> <data_center href="/ovirt-engine/api/datacenters/123" id="123"/> <stp>false</stp> <mtu>0</mtu> <usages> <usage>vm</usage> </usages> </network> Table 7.231. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. display Boolean Deprecated, 'usages' should be used to define network as a display network. dns_resolver_configuration DnsResolverConfiguration The DNS resolver configuration will be reported when retrieving the network using GET. id String A unique identifier. ip Ip Deprecated, not in use. mtu Integer Specifies the maximum transmission unit for the network. name String A human-readable name in plain text. port_isolation Boolean Defines whether communication between VMs running on the same host is blocked on this network. profile_required Boolean Specifies whether upon creation of the network a virtual network interface profile should automatically be created. 
required Boolean Defines whether the network is mandatory for all the hosts in the cluster. status NetworkStatus The status of the network. stp Boolean Specifies whether the spanning tree protocol is enabled for the network. usages NetworkUsage[ ] Defines a set of usage elements for the network. vdsm_name String The name of the network used on the host. vlan Vlan A VLAN tag. 7.175.1. dns_resolver_configuration The DNS resolver configuration will be reported when retrieving the network using GET. It is optional both when creating a new network and when updating an existing one. 7.175.2. port_isolation Defines whether communication between VMs running on the same host is blocked on this network. Applies only to VM networks. It is the responsibility of the network administrator to ensure that communication between multiple hosts is blocked. This attribute can be set only on network creation and cannot be edited. When the value is not set, communication between VMs running on the same host is allowed. 7.175.3. required Defines whether the network is mandatory for all the hosts in the cluster. If a 'required' operational network is omitted from a host, the host will be marked as non_operational . 7.175.4. status The status of the network. non_operational if the network is defined as 'required' and omitted from any active cluster host. operational otherwise. 7.175.5. usages Defines a set of usage elements for the network. For example, users can specify that the network is to be used for virtual machine traffic and also for display traffic with the vm and display values. 7.175.6. vdsm_name The name of the network used on the host. This alternative name is automatically generated by VDSM when the network name is found unsuitable to serve as a bridge name on the host. Unsuitable names contain spaces or special characters, or are longer than 15 characters, and are replaced with a UUID on the host. This parameter is read-only. Setting it will have no effect. Table 7.232. Links summary Name Type Summary cluster Cluster A reference to the cluster this network is attached to. data_center DataCenter A reference to the data center that the network is a member of. external_provider OpenStackNetworkProvider An optional reference to the OpenStack network provider on which the network is created. external_provider_physical_network Network An optional reference to a network that should be used for physical network access. network_labels NetworkLabel[ ] A reference to the labels assigned to the network. permissions Permission[ ] A reference to the permissions of the network. qos Qos Reference to quality of service. vnic_profiles VnicProfile[ ] A reference to the profiles of the network. 7.175.7. cluster A reference to the cluster this network is attached to. Will be filled only if the network is accessed from the cluster level. 7.175.8. external_provider An optional reference to the OpenStack network provider on which the network is created. If it is specified when a network is created, a matching OpenStack network will also be created. 7.175.9. external_provider_physical_network An optional reference to a network that should be used for physical network access. Valid only if external_provider is specified. 7.176. NetworkAttachment struct Describes how a host connects to a network.
An XML representation of a network attachment on a host: <network_attachment href="/ovirt-engine/api/hosts/123/nics/456/networkattachments/789" id="789"> <network href="/ovirt-engine/api/networks/234" id="234"/> <host_nic href="/ovirt-engine/api/hosts/123/nics/123" id="123"/> <in_sync>true</in_sync> <ip_address_assignments> <ip_address_assignment> <assignment_method>static</assignment_method> <ip> <address>192.168.122.39</address> <gateway>192.168.122.1</gateway> <netmask>255.255.255.0</netmask> <version>v4</version> </ip> </ip_address_assignment> </ip_address_assignments> <reported_configurations> <reported_configuration> <name>mtu</name> <expected_value>1500</expected_value> <actual_value>1500</actual_value> <in_sync>true</in_sync> </reported_configuration> <reported_configuration> <name>bridged</name> <expected_value>true</expected_value> <actual_value>true</actual_value> <in_sync>true</in_sync> </reported_configuration> ... </reported_configurations> </network_attachment> The network element, with either a name or an id , is required in order to attach a network to a network interface card (NIC). For example, to attach a network to a host network interface card, send a request like this: With a request body like this: <networkattachment> <network id="234"/> </networkattachment> To attach a network to a host, send a request like this: With a request body like this: <network_attachment> <network id="234"/> <host_nic id="456"/> </network_attachment> The ip_address_assignments and properties elements are updatable post-creation. For example, to update a network attachment, send a request like this: With a request body like this: <network_attachment> <ip_address_assignments> <ip_address_assignment> <assignment_method>static</assignment_method> <ip> <address>7.1.1.1</address> <gateway>7.1.1.2</gateway> <netmask>255.255.255.0</netmask> <version>v4</version> </ip> </ip_address_assignment> </ip_address_assignments> </network_attachment> To detach a network from the network interface card send a request like this: Important Changes to network attachment configuration must be explicitly committed. An XML representation of a network attachment's properties sub-collection: <network_attachment> <properties> <property> <name>bridge_opts</name> <value> forward_delay=1500 group_fwd_mask=0x0 multicast_snooping=1 </value> </property> </properties> ... </network_attachment> Table 7.233. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. dns_resolver_configuration DnsResolverConfiguration DNS resolver configuration will be reported when retrieving the network attachment using GET. id String A unique identifier. in_sync Boolean ip_address_assignments IpAddressAssignment[ ] The IP configuration of the network. name String A human-readable name in plain text. properties Property[ ] Defines custom properties for the network configuration. reported_configurations ReportedConfiguration[ ] A read-only list of configuration properties. 7.176.1. dns_resolver_configuration DNS resolver configuration will be reported when retrieving the network attachment using GET. It is optional when creating a new network attachment or updating an existing one. 7.176.2. properties Defines custom properties for the network configuration. Bridge options have the set name of bridge_opts. Separate multiple entries with a whitespace character. 
The following keys are valid for bridge_opts : Name Default value forward_delay 1500 gc_timer 3765 group_addr 1:80:c2:0:0:0 group_fwd_mask 0x0 hash_elasticity 4 hash_max 512 hello_time 200 hello_timer 70 max_age 2000 multicast_last_member_count 2 multicast_last_member_interval 100 multicast_membership_interval 26000 multicast_querier 0 multicast_querier_interval 25500 multicast_query_interval 13000 multicast_query_response_interval 1000 multicast_query_use_ifaddr 0 multicast_router 1 multicast_snooping 1 multicast_startup_query_count 2 multicast_startup_query_interval 3125 Table 7.234. Links summary Name Type Summary host Host host_nic HostNic A reference to the host network interface. network Network A reference to the network that the interface is attached to. qos Qos 7.177. NetworkConfiguration struct Table 7.235. Attributes summary Name Type Summary dns Dns nics Nic[ ] 7.178. NetworkFilter struct Network filters filter packets sent to and from the virtual machine's NIC according to defined rules. There are several types of network filters supported, based on libvirt. For more details about the different network filters see here . The default network filter is based on the network type and configuration. For a VM network, the default filter is vdsm-no-mac-spoof if EnableMACAntiSpoofingFilterRules is True; otherwise no filter is configured. For OVN networks, no filter is configured. In addition to libvirt's network filters, there are two additional network filters: The first is called vdsm-no-mac-spoofing and is composed of no-mac-spoofing and no-arp-mac-spoofing . The second is called ovirt-no-filter and is used when no network filter is to be defined for the virtual machine's NIC. The ovirt-no-filter network filter is only used for internal implementation, and does not exist on the NICs. This is an example of the XML representation: <network_filter id="00000019-0019-0019-0019-00000000026c"> <name>example-filter</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> If any part of the version is not present, it is represented by -1. Table 7.236. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. version Version The minimum supported version of a specific NetworkFilter. 7.178.1. version The minimum supported version of a specific NetworkFilter. This is the version that the NetworkFilter was first introduced in. 7.179. NetworkFilterParameter struct Parameter for the network filter . See Libvirt-Filters for further details. This is an example of the XML representation: <network_filter_parameter id="123"> <name>IP</name> <value>10.0.1.2</value> </network_filter_parameter> Table 7.237. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. value String Represents the value of the parameter. Table 7.238. Links summary Name Type Summary nic Nic The virtual machine NIC the parameter is associated with. 7.180. NetworkLabel struct Represents a label which can be added to a host network interface and to a network. The label binds the network to the host network interface by the label id . Table 7.239.
Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.240. Links summary Name Type Summary host_nic HostNic A reference to the host network interface which contains this label. network Network A reference to the network which contains this label. 7.181. NetworkPluginType enum Network plug-in type. Specifies the provider driver implementation on the host. Since version 4.2 of the Red Hat Virtualization Manager, this type has been deprecated in favour of the external_plugin_type attribute of the OpenStackNetworkProvider type. Table 7.241. Values summary Name Summary open_vswitch Open vSwitch. 7.181.1. open_vswitch Open vSwitch. Specifies that an Open vSwitch-based driver implementation should be used for this provider. Since version 4.2 of the Red Hat Virtualization Manager, this value has been deprecated. Use the string open_vswitch in the OpenStackNetworkProvider.external_plugin_type attribute instead. 7.182. NetworkStatus enum Table 7.242. Values summary Name Summary non_operational operational 7.183. NetworkUsage enum This type indicates the purpose that the network is used for in the cluster. Table 7.243. Values summary Name Summary default_route The default gateway and the DNS resolver configuration of the host will be taken from this network. display The network will be used for SPICE and VNC traffic. gluster The network will be used for Gluster (bricks) data traffic. management The network will be used for communication between the Red Hat Virtualization Manager and the nodes. migration The network will be used for virtual machine migration. vm 7.183.1. default_route The default gateway and the DNS resolver configuration of the host will be taken from this network. If this network is attached to the host, then the DNS resolver configuration will be taken from the dns_resolver_configuration attribute of the network attachment. If there is no dns_resolver_configuration attribute in this network attachment, then it will be taken from the dns_resolver_configuration of the network itself. If the dns_resolver_configuration attribute is not present there either, the DNS resolver configuration will not be set. If you set this flag on a network, then the default gateway for the host will be taken from the gateway attribute of the ip_address_assignment of the network attachment. 7.183.2. management The network will be used for communication between the Red Hat Virtualization Manager and the nodes. This is the network where the ovirtmgmt bridge will be created. 7.184. NfsProfileDetail struct Table 7.244. Attributes summary Name Type Summary nfs_server_ip String profile_details ProfileDetail[ ] 7.185. NfsVersion enum Table 7.245. Values summary Name Summary auto v3 v4 v4_0 NFS 4.0. v4_1 v4_2 NFS 4.2. 7.185.1. v4_0 NFS 4.0. 7.185.2. v4_2 NFS 4.2. 7.186. Nic struct Represents a virtual machine NIC. For example, the XML representation of a NIC will look like this: <nic href="/ovirt-engine/api/vms/123/nics/456" id="456"> <name>nic1</name> <vm href="/ovirt-engine/api/vms/123" id="123"/> <interface>virtio</interface> <linked>true</linked> <mac> <address>02:00:00:00:00:00</address> </mac> <plugged>true</plugged> <vnic_profile href="/ovirt-engine/api/vnicprofiles/789" id="789"/> </nic> Table 7.246. Attributes summary Name Type Summary boot_protocol BootProtocol Defines how an IP address is assigned to the NIC.
comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. interface NicInterface The type of driver used for the NIC. linked Boolean Defines if the NIC is linked to the virtual machine. mac Mac The MAC address of the interface. name String A human-readable name in plain text. on_boot Boolean Defines if the network interface should be activated upon operating system startup. plugged Boolean Defines if the NIC is plugged in to the virtual machine. synced Boolean Defines if the NIC configuration on the virtual machine is synced with the configuration represented by the engine. Table 7.247. Links summary Name Type Summary instance_type InstanceType An optional reference to an instance type that the device is used by. network Network A reference to the network that the interface should be connected to. network_attachments NetworkAttachment[ ] A link to a collection of network attachments that are associated with the host NIC. network_filter_parameters NetworkFilterParameter[ ] A link to the network filter parameters. network_labels NetworkLabel[ ] A link to a collection of network labels that are associated with the host NIC. reported_devices ReportedDevice[ ] A link to a collection of reported devices that are associated with the virtual network interface. statistics Statistic[ ] A link to the statistics for the NIC. template Template An optional reference to a template that the device is used by. virtual_function_allowed_labels NetworkLabel[ ] A link to a collection of network labels that are allowed to be attached to the virtual functions of an SR-IOV NIC. virtual_function_allowed_networks Network[ ] A link to a collection of networks that are allowed to be attached to the virtual functions of an SR-IOV NIC. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. vnic_profile VnicProfile A link to an associated virtual network interface profile. 7.186.1. network A reference to the network that the interface should be connected to. A blank network ID is allowed. Usage of this element for creating or updating a NIC is deprecated; use vnic_profile instead. It is preserved because it is still in use by the initialization element, as a holder for IP addresses and other network details. 7.186.2. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk may be used simultaneously by two or more virtual machines. 7.187. NicConfiguration struct The type describes the configuration of a virtual network interface. Table 7.248. Attributes summary Name Type Summary boot_protocol BootProtocol IPv4 boot protocol. ip Ip IPv4 address details. ipv6 Ip IPv6 address details. ipv6_boot_protocol BootProtocol IPv6 boot protocol. name String Network interface name. on_boot Boolean Specifies whether the network interface should be activated on the virtual machine guest operating system boot. 7.188. NicInterface enum Defines the options for an emulated virtual network interface device model. Table 7.249. Values summary Name Summary e1000 e1000. e1000e e1000e. pci_passthrough PCI Passthrough. rtl8139 rtl8139. rtl8139_virtio Dual mode rtl8139, VirtIO. spapr_vlan sPAPR VLAN. virtio VirtIO. 7.189. NicStatus enum Network interface card status. Table 7.250. Values summary Name Summary down The NIC is down and cannot be accessed. up The NIC is up and can be accessed. 7.190.
NotifiableEvent enum Type representing a subset of events in the Red Hat Virtualization server: those which a user may subscribe to receive a notification about. Table 7.251. Values summary Name Summary cluster_alert_ha_reservation HA Reservation check has failed cluster_alert_ha_reservation_down HA Reservation check has passed dwh_error ETL Service Error dwh_stopped ETL Service Stopped engine_backup_completed Engine backup completed successfully engine_backup_failed Engine backup failed engine_backup_started Engine backup started engine_ca_certification_has_expired Engine CA's certification has expired engine_ca_certification_is_about_to_expire Engine CA's certification is about to expire engine_certification_has_expired Engine's certification has expired engine_certification_is_about_to_expire Engine's certification is about to expire engine_stop Engine has stopped faulty_multipaths_on_host Faulty multipath paths on host gluster_brick_status_changed Detected change in status of brick gluster_hook_add_failed Failed to add Gluster Hook on conflicting servers gluster_hook_added Added Gluster Hook gluster_hook_conflict_detected Detected conflict in Gluster Hook gluster_hook_detected_delete Detected removal of Gluster Hook gluster_hook_detected_new Detected new Gluster Hook gluster_hook_disable Gluster Hook Disabled gluster_hook_disable_failed Failed to Disable Gluster Hook gluster_hook_enable Gluster Hook Enabled gluster_hook_enable_failed Failed to Enable Gluster Hook gluster_hook_remove_failed Failed to remove Gluster Hook from cluster gluster_hook_removed Removed Gluster Hook gluster_server_add_failed Failed to Add Gluster Server gluster_server_remove Gluster Server Removed gluster_server_remove_failed Failed to Remove Gluster Server gluster_service_restart_failed Failed to re-start Gluster Service gluster_service_restarted Gluster Service re-started gluster_service_start_failed Failed to start Gluster service gluster_service_started Gluster Service started gluster_service_stop_failed Failed to stop Gluster service gluster_service_stopped Gluster Service stopped gluster_volume_add_brick Gluster Volume brick(s) added gluster_volume_add_brick_failed Failed to add brick(s) on Gluster Volume gluster_volume_all_snapshots_delete_failed Failed to delete snapshots on the volume gluster_volume_all_snapshots_deleted All the snapshots deleted on the volume gluster_volume_brick_replaced Gluster Volume Brick Replaced gluster_volume_confirmed_space_low Low space for volume confirmed gluster_volume_create Gluster Volume Created gluster_volume_create_failed Gluster Volume could not be created gluster_volume_delete Gluster Volume deleted gluster_volume_delete_failed Gluster Volume could not be deleted gluster_volume_migrate_brick_data_finished Gluster Volume migration of data for remove brick finished gluster_volume_option_added Gluster Volume Option added gluster_volume_option_modified Gluster Volume Option modified gluster_volume_option_set_failed Gluster Volume Option could not be set gluster_volume_options_reset Gluster Volume Options reset gluster_volume_options_reset_all All the Gluster Volume Options reset gluster_volume_options_reset_failed Gluster Volume Options could not be reset gluster_volume_profile_start Gluster Volume Profile started gluster_volume_profile_start_failed Failed to start Gluster Volume Profile gluster_volume_profile_stop Gluster Volume Profile stopped gluster_volume_profile_stop_failed Failed to stop Gluster Volume Profile gluster_volume_rebalance_finished Gluster Volume 
rebalance finished gluster_volume_rebalance_not_found_from_cli Could not find information for rebalance on volume from CLI. gluster_volume_rebalance_start Gluster Volume Rebalance started gluster_volume_rebalance_start_detected_from_cli Detected start of rebalance on gluster volume from CLI gluster_volume_rebalance_start_failed Gluster Volume Rebalance could not be started gluster_volume_rebalance_stop Gluster Volume Rebalance stopped gluster_volume_rebalance_stop_failed Gluster Volume Rebalance could not be stopped gluster_volume_remove_bricks Gluster Volume Bricks Removed gluster_volume_remove_bricks_failed Gluster Volume Bricks could not be removed gluster_volume_remove_bricks_stop Stopped removing bricks from Gluster Volume gluster_volume_remove_bricks_stop_failed Failed to stop remove bricks from Gluster Volume gluster_volume_replace_brick_failed Gluster Volume Replace Brick Failed gluster_volume_replace_brick_start Gluster Volume Replace Brick Started gluster_volume_replace_brick_start_failed Gluster Volume Replace Brick could not be started gluster_volume_snapshot_activate_failed Failed to activate snapshot on the volume gluster_volume_snapshot_activated Snapshot activated on the volume gluster_volume_snapshot_create_failed Could not create snapshot for volume USD{glusterVolumeName} on cluster USD{clusterName}. gluster_volume_snapshot_created Snapshot USD{snapname} created for volume USD{glusterVolumeName} on cluster USD{clusterName}. gluster_volume_snapshot_deactivate_failed Failed to de-activate snapshot on the volume gluster_volume_snapshot_deactivated Snapshot de-activated on the volume gluster_volume_snapshot_delete_failed Failed to delete snapshot on volume gluster_volume_snapshot_deleted Snapshot deleted on volume gluster_volume_snapshot_restore_failed Failed to restore snapshot on the volume gluster_volume_snapshot_restored Snapshot restore on the volume gluster_volume_start Gluster volume started gluster_volume_start_failed Gluster Volume could not be started gluster_volume_stop Gluster volume stopped gluster_volume_stop_failed Gluster Volume could not be stopped ha_vm_failed Highly-Available VM failed ha_vm_restart_failed Highly-Available VM restart failed host_activate_failed Failed to activate Host host_activate_manual_ha Host was activated, but the Hosted Engine HA service may still be in maintenance mode host_approve_failed Failed to approve Host host_bond_slave_state_down Host's slave of bond changed state to down host_certificate_has_invalid_san Host's certificate contains invalid subject alternative name (SAN) host_certification_has_expired Host's certification has expired host_certification_is_about_to_expire Host's certification is about to expire host_failure Host is non responsive host_high_cpu_use Host cpu usage exceeded defined threshold host_high_mem_use Host memory usage exceeded defined threshold host_high_swap_use Host swap memory usage exceeded defined threshold host_initiated_run_vm_failed Failed to restart VM on a different host host_install_failed Host installation failed host_interface_high_network_use Host network interface usage exceeded defined threshold host_interface_state_down Host's interface changed state to down host_low_mem Host free memory is under defined threshold host_low_swap Host free swap memory is under defined threshold host_recover_failed Host failed to recover host_set_nonoperational Host state was set to non-operational host_set_nonoperational_domain Host state was set to non-operational due to inaccessible Storage Domain 
host_set_nonoperational_iface_down Host state was set to non-operational due to a missing Interface host_slow_storage_response_time Slow storage response time host_time_drift_alert Host has time-drift host_untrusted Host state was set to non-operational. host_updates_are_available Host has available updates host_updates_are_available_with_packages Host has available packages to update importexport_import_template_from_trusted_to_untrusted Template imported from trusted cluster into non-trusted cluster importexport_import_template_from_untrusted_to_trusted Template imported from non-trusted cluster into trusted cluster importexport_import_vm_from_trusted_to_untrusted Import VM from trusted cluster into non-trusted cluster importexport_import_vm_from_untrusted_to_trusted Import VM from non-trusted cluster into trusted cluster irs_confirmed_disk_space_low Confirmed low disk space irs_disk_space_low Low disk space irs_disk_space_low_error Critically low disk space irs_failure Failed to access Storage mac_address_is_external VM with external MAC address multipath_devices_without_valid_paths_on_host Multipath devices without valid paths on host network_update_display_for_cluster_with_active_vm Display network was updated on cluster with an active VM network_update_display_for_host_with_active_vm Display network was updated on host with an active VM no_faulty_multipaths_on_host No faulty multipath paths on host number_of_lvs_on_storage_domain_exceeded_threshold Storage Domain's number of LVs exceeded threshold remove_gluster_volume_bricks_not_found_from_cli Could not find information for remove brick on volume from CLI. start_removing_gluster_volume_bricks Started removing bricks from Volume start_removing_gluster_volume_bricks_detected_from_cli Detected start of brick removal for bricks on volume from CLI start_removing_gluster_volume_bricks_failed Could not remove volume bricks system_change_storage_pool_status_no_host_for_spm Failed electing an SPM for the Data-Center system_deactivated_storage_domain Storage Domain state was set to inactive user_add_vm_from_trusted_to_untrusted A non-trusted VM was created from trusted Template user_add_vm_from_untrusted_to_trusted A trusted VM was created from non-trusted Template user_add_vm_template_from_trusted_to_untrusted A non-trusted Template was created from trusted VM user_add_vm_template_from_untrusted_to_trusted A trusted Template was created from non-trusted VM user_host_maintenance Host was switched to Maintenance Mode user_host_maintenance_manual_ha Host was switched to Maintenance Mode, but Hosted Engine HA maintenance mode could not be enabled user_host_maintenance_migration_failed Failed to switch Host to Maintenance mode user_update_vm_from_trusted_to_untrusted VM moved from trusted cluster to non-trusted cluster user_update_vm_from_untrusted_to_trusted VM moved from non-trusted cluster to trusted cluster user_update_vm_template_from_trusted_to_untrusted Template moved from trusted cluster to non-trusted cluster user_update_vm_template_from_untrusted_to_trusted Template moved from a non-trusted cluster to a trusted cluster vm_console_connected VM console connected vm_console_disconnected VM console disconnected vm_down_error VM is down with error vm_failure VM cannot be found on Host vm_migration_failed Migration failed vm_migration_start Starting migration of VM vm_migration_to_server_failed Migration of VM to a destination host failed vm_not_responding VM is not responding vm_paused VM has been paused vm_paused_eio VM has been paused due 
to a storage I/O error vm_paused_enospc VM has been paused due to lack of storage space vm_paused_eperm VM has been paused due to storage read/write permissions problem vm_paused_error VM has been paused due to unknown storage error vm_recovered_from_pause_error VM has recovered from paused back to up vm_set_ticket VM console session initiated vm_status_restored VM status restored 7.190.1. gluster_volume_rebalance_not_found_from_cli Could not find information for rebalance on volume from CLI. Marking it as unknown. 7.190.2. host_untrusted Host state was set to non-operational. Host is untrusted by the attestation service 7.190.3. remove_gluster_volume_bricks_not_found_from_cli Could not find information for remove brick on volume from CLI. Marking it as unknown. 7.191. NotificationMethod enum Type representing the notification method for an event subscription. Currently only SMTP is supported by the API In the future support for SNMP notifications may be added. Table 7.252. Values summary Name Summary smtp Notification by e-mail. snmp Notification by SNMP. 7.191.1. smtp Notification by e-mail. Event-subscriptions with SMTP notification method will contain an email address in the address field. 7.191.2. snmp Notification by SNMP. Event-subscriptions with SNMP notification method will contain an SNMP address in the address field. 7.192. NumaNode struct Represents a physical NUMA node. Example XML representation: <host_numa_node href="/ovirt-engine/api/hosts/0923f1ea/numanodes/007cf1ab" id="007cf1ab"> <cpu> <cores> <core> <index>0</index> </core> </cores> </cpu> <index>0</index> <memory>65536</memory> <node_distance>40 20 40 10</node_distance> <host href="/ovirt-engine/api/hosts/0923f1ea" id="0923f1ea"/> </host_numa_node> Table 7.253. Attributes summary Name Type Summary comment String Free text containing comments about this object. cpu Cpu description String A human-readable description in plain text. id String A unique identifier. index Integer memory Integer Memory of the NUMA node in MB. name String A human-readable name in plain text. node_distance String Table 7.254. Links summary Name Type Summary host Host statistics Statistic[ ] Each host NUMA node resource exposes a statistics sub-collection for host NUMA node specific statistics. 7.192.1. statistics Each host NUMA node resource exposes a statistics sub-collection for host NUMA node specific statistics. An example of an XML representation: <statistics> <statistic href="/ovirt-engine/api/hosts/123/numanodes/456/statistics/789" id="789"> <name>memory.total</name> <description>Total memory</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>25165824000</datum> </value> </values> <host_numa_node href="/ovirt-engine/api/hosts/123/numanodes/456" id="456" /> </statistic> ... </statistics> Note This statistics sub-collection is read-only. The following list shows the statistic types for a host NUMA node: Name Description memory.total Total memory in bytes on the NUMA node. memory.used Memory in bytes used on the NUMA node. memory.free Memory in bytes free on the NUMA node. cpu.current.user Percentage of CPU usage for user slice. cpu.current.system Percentage of CPU usage for system. cpu.current.idle Percentage of idle CPU usage. 7.193. NumaNodePin struct Represents the pinning of a virtual NUMA node to a physical NUMA node. Table 7.255. Attributes summary Name Type Summary host_numa_node NumaNode Deprecated. 
index Integer The index of a physical NUMA node to which the virtual NUMA node is pinned. pinned Boolean Deprecated. 7.193.1. host_numa_node Deprecated. Has no function. 7.193.2. pinned Deprecated. Should always be true . 7.194. NumaTuneMode enum Table 7.256. Values summary Name Summary interleave preferred strict 7.195. OpenStackImage struct Table 7.257. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.258. Links summary Name Type Summary openstack_image_provider OpenStackImageProvider 7.196. OpenStackImageProvider struct Table 7.259. Attributes summary Name Type Summary authentication_url String Defines the external provider authentication URL address. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. password String Defines password for the user during the authentication process. properties Property[ ] Array of provider name/value properties. requires_authentication Boolean Defines whether provider authentication is required or not. tenant_name String Defines the tenant name for OpenStack Identity API v2. url String Defines URL address of the external provider. username String Defines user name to be used during authentication process. 7.196.1. requires_authentication Defines whether provider authentication is required or not. If authentication is required, both username and password attributes will be used during authentication. 7.196.2. tenant_name Defines the tenant name for OpenStack Identity API v2.0. Table 7.260. Links summary Name Type Summary certificates Certificate[ ] images OpenStackImage[ ] 7.197. OpenStackNetwork struct Table 7.261. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.262. Links summary Name Type Summary openstack_network_provider OpenStackNetworkProvider 7.198. OpenStackNetworkProvider struct Table 7.263. Attributes summary Name Type Summary agent_configuration AgentConfiguration Deprecated Agent configuration settings. authentication_url String Defines the external provider authentication URL address. auto_sync Boolean Indicates if the networks of this provider are automatically synchronized. comment String Free text containing comments about this object. description String A human-readable description in plain text. external_plugin_type String Network plug-in type. id String A unique identifier. name String A human-readable name in plain text. password String Defines password for the user during the authentication process. plugin_type NetworkPluginType Network plug-in type. project_domain_name String Defines the project's domain name for OpenStack Identity API v3. project_name String Defines the project name for OpenStack Identity API v3. properties Property[ ] Array of provider name/value properties. read_only Boolean Indicates whether the provider is read-only. requires_authentication Boolean Defines whether provider authentication is required or not. tenant_name String Defines the tenant name for OpenStack Identity API v2. type OpenStackNetworkProviderType The type of provider. 
unmanaged Boolean Indicates whether the provider is unmanaged by Red Hat Virtualization. url String Defines URL address of the external provider. user_domain_name String Defines the domain name of the username in ExternalProvider for OpenStack Identity API v3. username String Defines user name to be used during authentication process. 7.198.1. agent_configuration Deprecated Agent configuration settings. Ignored, because the deployment of OpenStack Neutron agent is dropped since Red Hat Virtualization 4.4.0. 7.198.2. auto_sync Indicates if the networks of this provider are automatically synchronized. If true , the networks of this provider are automatically and cyclically synchronized to Red Hat Virtualization in the background. This means that all new networks of this provider are imported, and all discarded networks are removed from all clusters that have this external provider as the default provider. If the name of a network is changed on the provider, the change is synchronized to the network entity in Red Hat Virtualization. Furthermore, if a new cluster that has the provider as the default provider is added, already imported networks are attached to this new cluster during synchronization. The automatically initiated import triggers the following steps: The networks of the external provider will be imported to every data center in the data centers of the clusters that have that external provider as the default provider. A vNIC profile will be created for each involved data center and network. The networks will be assigned to each cluster that has that external provider as the default provider. All users are allowed to use the new vNIC Profile. The default is false for backwards compatibility. 7.198.3. external_plugin_type Network plug-in type. This attribute allows you to choose the correct provider driver on the host when an external NIC is added or modified. If automated installation of the driver is supported (only available for some predefined implementations, for example ovirt-provider-ovn ), this attribute will also allow the system to decide which driver implementation to install on newly added hosts. 7.198.4. plugin_type Network plug-in type. Since version 4.2 of the Red Hat Virtualization Manager, this attribute has been deprecated in favour of external_plugin_type . This attribute is only valid for providers of type open_vswitch , and will only be returned when the value of the external_plugin_type attribute value is equal to open_vswitch . If both plugin_type and external_plugin_type are specified during an update, the value of plugin_type will be ignored. For external providers this value will not be shown and will be ignored during update requests. 7.198.5. read_only Indicates whether the provider is read-only. A read-only provider does not allow adding, modifying, or deleting of networks or subnets. Port-related operations are allowed, as they are required for the provisioning of virtual NICs. 7.198.6. requires_authentication Defines whether provider authentication is required or not. If authentication is required, both username and password attributes will be used during authentication. 7.198.7. tenant_name Defines the tenant name for OpenStack Identity API v2.0. 7.198.8. unmanaged Indicates whether the provider is unmanaged by Red Hat Virtualization. If true , authentication and subnet control are entirely left to the external provider and are unmanaged by Red Hat Virtualization. The default is false for backwards compatibility. Table 7.264. 
Links summary Name Type Summary certificates Certificate[ ] Reference to the certificates list. networks OpenStackNetwork[ ] Reference to the OpenStack networks list. subnets OpenStackSubnet[ ] Reference to the OpenStack networks subnets list. 7.199. OpenStackNetworkProviderType enum The OpenStack network provider can either be implemented by OpenStack Neutron, in which case the Neutron agent is automatically installed on the hosts, or it can be an external provider implementing the OpenStack API, in which case the virtual interface driver is a custom solution installed manually. Table 7.265. Values summary Name Summary external Indicates that the provider is an external one, implementing the OpenStack Neutron API. neutron Indicates that the provider is OpenStack Neutron. 7.199.1. external Indicates that the provider is an external one, implementing the OpenStack Neutron API. The virtual interface driver in this case is implemented by the external provider. 7.199.2. neutron Indicates that the provider is OpenStack Neutron. The standard OpenStack Neutron agent is used as the virtual interface driver. 7.200. OpenStackProvider struct Table 7.266. Attributes summary Name Type Summary authentication_url String Defines the external provider authentication URL address. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. password String Defines password for the user during the authentication process. properties Property[ ] Array of provider name/value properties. requires_authentication Boolean Defines whether provider authentication is required or not. tenant_name String Defines the tenant name for OpenStack Identity API v2. url String Defines URL address of the external provider. username String Defines user name to be used during authentication process. 7.200.1. requires_authentication Defines whether provider authentication is required or not. If authentication is required, both username and password attributes will be used during authentication. 7.200.2. tenant_name Defines the tenant name for OpenStack Identity API v2.0. 7.201. OpenStackSubnet struct Table 7.267. Attributes summary Name Type Summary cidr String Defines network CIDR. comment String Free text containing comments about this object. description String A human-readable description in plain text. dns_servers String[ ] Defines a list of DNS servers. gateway String Defines IP gateway. id String A unique identifier. ip_version String Defines IP version. name String A human-readable name in plain text. 7.201.1. ip_version Defines IP version. Values can be v4 for IPv4 or v6 for IPv6. Table 7.268. Links summary Name Type Summary openstack_network OpenStackNetwork Reference to the service managing the OpenStack network. 7.202. OpenStackVolumeProvider struct Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 7.269. Attributes summary Name Type Summary authentication_url String Defines the external provider authentication URL address. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. password String Defines password for the user during the authentication process. properties Property[ ] Array of provider name/value properties.
requires_authentication Boolean Defines whether provider authentication is required or not. tenant_name String Defines the tenant name for OpenStack Identity API v2. url String Defines URL address of the external provider. username String Defines user name to be used during authentication process. 7.202.1. requires_authentication Defines whether provider authentication is required or not. If authentication is required, both username and password attributes will be used during authentication. 7.202.2. tenant_name Defines the tenant name for OpenStack Identity API v2.0. Table 7.270. Links summary Name Type Summary authentication_keys OpenstackVolumeAuthenticationKey[ ] certificates Certificate[ ] data_center DataCenter volume_types OpenStackVolumeType[ ] 7.203. OpenStackVolumeType struct Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 7.271. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. properties Property[ ] Table 7.272. Links summary Name Type Summary openstack_volume_provider OpenStackVolumeProvider 7.204. OpenstackVolumeAuthenticationKey struct Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 7.273. Attributes summary Name Type Summary comment String Free text containing comments about this object. creation_date Date description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. usage_type OpenstackVolumeAuthenticationKeyUsageType uuid String value String Table 7.274. Links summary Name Type Summary openstack_volume_provider OpenStackVolumeProvider 7.205. OpenstackVolumeAuthenticationKeyUsageType enum Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 7.275. Values summary Name Summary ceph 7.206. OperatingSystem struct Information describing the operating system. This is used for both virtual machines and hosts. Table 7.276. Attributes summary Name Type Summary boot Boot Configuration of the boot sequence. cmdline String Custom kernel parameters for starting the virtual machine if Linux operating system is used. custom_kernel_cmdline String A custom part of the host kernel command line. initrd String Path to custom initial ramdisk on ISO storage domain if Linux operating system is used. kernel String Path to custom kernel on ISO storage domain if Linux operating system is used. reported_kernel_cmdline String The host kernel command line as reported by a running host. type String Operating system name in human readable form. version Version 7.206.1. boot Configuration of the boot sequence. Note Not used for hosts. 7.206.2. cmdline Custom kernel parameters for starting the virtual machine if Linux operating system is used. Note Not used for hosts. 7.206.3. custom_kernel_cmdline A custom part of the host kernel command line. This will be merged with the existing kernel command line. You must reinstall and then reboot the host to apply the changes implemented by this attribute. During each host deploy procedure, kernel parameters that were added in the host deploy procedure are removed using grubby --update-kernel DEFAULT --remove-args <previous_custom_params> , and the current kernel command line customization is applied using grubby --update-kernel DEFAULT --args <custom_params> . 
The Manager internally keeps track of the last-applied kernel parameters customization. Note This attribute is currently only used for hosts. 7.206.4. initrd Path to custom initial ramdisk on ISO storage domain if Linux operating system is used. For example iso://initramfs-3.10.0-514.6.1.el7.x86_64.img . Note Not used for hosts. 7.206.5. kernel Path to custom kernel on ISO storage domain if Linux operating system is used. For example iso://vmlinuz-3.10.0-514.6.1.el7.x86_64 . Note Not used for hosts. 7.206.6. reported_kernel_cmdline The host kernel command line as reported by a running host. This is a read-only attribute. Attempts to change this attribute are silently ignored. Note This attribute is currently only used for hosts. 7.206.7. type Operating system name in human readable form. For example Fedora or RHEL . In general, this is one of the names returned by the operating system service. Note Read only for hosts. 7.207. OperatingSystemInfo struct Represents a guest operating system. Table 7.277. Attributes summary Name Type Summary architecture Architecture Operating system architecture. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. large_icon Icon Large icon of the guest operating system. name String A human-readable name in plain text. small_icon Icon Small icon of the guest operating system. tpm_support TpmSupport TPM support status. 7.207.1. large_icon Large icon of the guest operating system. Maximum dimensions: width 150px, height 120px. 7.207.2. small_icon Small icon of the guest operating system. Maximum dimensions: width 43px, height 43px. 7.208. Option struct Table 7.278. Attributes summary Name Type Summary name String type String value String 7.209. OsType enum Type representing the kind of operating system. Warning This type has been deprecated with the introduction of the OperatingSystemInfo type. Operating systems are available as a top-level collection in the API: operating_systems . The end-user declares the type of the operating system installed in the virtual machine (guest operating system) by selecting one of these values. This declaration enables the system to tune the virtual machine configuration for a better user experience. For example, the system chooses devices that are most suitable for the operating system. Note that the system relies on the user's selection and does not verify it by inspecting the actual guest operating system installed. Table 7.279. Values summary Name Summary other Other type of operating system, not specified by the other values. other_linux Distribution of Linux other than those specified by the other values. rhel_3 Red Hat Enterprise Linux 3 32-bit. rhel_3x64 Red Hat Enterprise Linux 3 64-bit. rhel_4 Red Hat Enterprise Linux 4 32-bit. rhel_4x64 Red Hat Enterprise Linux 4 64-bit. rhel_5 Red Hat Enterprise Linux 5 32-bit. rhel_5x64 Red Hat Enterprise Linux 5 64-bit. rhel_6 Red Hat Enterprise Linux 6 32-bit. rhel_6x64 Red Hat Enterprise Linux 6 64-bit. unassigned This value is mapped to other . windows_2003 Windows 2003 32-bit. windows_2003x64 Windows 2003 64-bit. windows_2008 Windows 2008 32-bit. windows_2008r2x64 Windows 2008 R2 64-bit. windows_2008x64 Windows 2008 64-bit. windows_2012x64 Windows 2012 64-bit. windows_7 Windows 7 32-bit. windows_7x64 Windows 7 64-bit. windows_8 Windows 8 32-bit. windows_8x64 Windows 8 64-bit. windows_xp Windows XP. 7.210. Package struct Type representing a package.
This is an example of the package element: <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> Table 7.280. Attributes summary Name Type Summary name String The name of the package. 7.211. ParallelMigrationsPolicy enum Type representing parallel migration connections policy. Table 7.281. Values summary Name Summary auto Choose automatically between parallel and non-parallel connections. auto_parallel Use parallel connections and select their number automatically. custom Use manually specified number of parallel connections. disabled Use non-parallel connections. inherit Use cluster value (applicable only to VMs). 7.211.1. auto Choose automatically between parallel and non-parallel connections. If parallel connections are used, select their number automatically. 7.211.2. custom Use manually specified number of parallel connections. The number of parallel connections must be set in MigrationOptions.customParallelMigrations. 7.212. Payload struct Table 7.282. Attributes summary Name Type Summary files File[ ] type VmDeviceType volume_id String 7.213. PayloadEncoding enum Table 7.283. Values summary Name Summary base64 plaintext 7.214. Permission struct Type represents a permission. Table 7.284. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.285. Links summary Name Type Summary cluster Cluster Reference to cluster. data_center DataCenter Reference to data center. disk Disk Reference to disk. group Group Reference to group. host Host Reference to host. role Role Reference to role. storage_domain StorageDomain Reference to storage domain. template Template Reference to template. user User Reference to user. vm Vm Reference to virtual machine. vm_pool VmPool Reference to virtual machines pool. 7.215. Permit struct Type represents a permit. Table 7.286. Attributes summary Name Type Summary administrative Boolean Specifies whether permit is administrative or not. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.287. Links summary Name Type Summary role Role Reference to the role the permit belongs to. 7.216. PmProxy struct Table 7.288. Attributes summary Name Type Summary type PmProxyType 7.217. PmProxyType enum Table 7.289. Values summary Name Summary cluster The fence proxy is selected from the same cluster as the fenced host. dc The fence proxy is selected from the same data center as the fenced host. other_dc The fence proxy is selected from a different data center than the fenced host. 7.218. PolicyUnitType enum Holds the types of all internal policy unit types. Table 7.290. Values summary Name Summary filter load_balancing weight 7.219. PortMirroring struct 7.220. PowerManagement struct Table 7.291. Attributes summary Name Type Summary address String The host name or IP address of the host. agents Agent[ ] Specifies fence agent options when multiple fences are used. automatic_pm_enabled Boolean Toggles the automated power control of the host in order to save energy. enabled Boolean Indicates whether power management configuration is enabled or disabled. kdump_detection Boolean Toggles whether to determine if kdump is running on the host before it is shut down. 
options Option[ ] Fencing options for the selected type= specified with the option name="" and value="" strings. password String A valid, robust password for power management. pm_proxies PmProxy[ ] Determines the power management proxy. status PowerManagementStatus Determines the power status of the host. type String Fencing device code. username String A valid user name for power management. 7.220.1. agents Specifies fence agent options when multiple fences are used. Use the order sub-element to prioritize the fence agents. Agents are run sequentially according to their order until the fence action succeeds. When two or more fence agents have the same order, they are run concurrently. Other sub-elements include type, ip, user, password, and options. 7.220.2. automatic_pm_enabled Toggles the automated power control of the host in order to save energy. When set to true, the host will be automatically powered down if the cluster's load is low, and powered on again when required. This is set to true when a host is created, unless disabled by the user. 7.220.3. kdump_detection Toggles whether to determine if kdump is running on the host before it is shut down. When set to true , the host will not shut down during a kdump process. This is set to true when a host has power management enabled, unless disabled by the user. 7.220.4. type Fencing device code. A list of valid fencing device codes is available in the capabilities collection. 7.221. PowerManagementStatus enum Table 7.292. Values summary Name Summary off Host is OFF. on Host is ON. unknown Unknown status. 7.222. Product struct Table 7.293. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. 7.223. ProductInfo struct Product information. The entry point contains a product_info element to help an API user determine the legitimacy of the Red Hat Virtualization environment. This includes the name of the product, the vendor and the version . Verify a genuine Red Hat Virtualization environment The following elements identify a genuine Red Hat Virtualization environment: Table 7.294. Attributes summary Name Type Summary instance_id String The ID of this particular installation of the product. name String The name of the product, for example oVirt Engine . vendor String The name of the vendor, for example ovirt.org . version Version The version number of the product. 7.223.1. vendor The name of the vendor, for example ovirt.org . 7.224. ProfileDetail struct Table 7.295. Attributes summary Name Type Summary block_statistics BlockStatistic[ ] duration Integer fop_statistics FopStatistic[ ] profile_type String statistics Statistic[ ] 7.225. Property struct Table 7.296. Attributes summary Name Type Summary name String value String 7.226. ProxyTicket struct Table 7.297. Attributes summary Name Type Summary value String 7.227. QcowVersion enum The QCOW version specifies which QEMU version the volume supports. This field can be updated using the update API and is reported only for QCOW volumes. It is determined by the version of the storage domain on which the disk is created. Storage domains with version lower than V4 support QCOW2 version 2 volumes, while V4 storage domains also support QCOW2 version 3. For more information about features of the different QCOW versions, see here .
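For illustration only, since this field can be updated through the update API, a hypothetical update request body that switches a QCOW disk to the newer compatibility version might be sketched as follows (the disk and the surrounding update call are assumptions, not taken from this reference):

<disk>
  <qcow_version>qcow2_v3</qcow_version>
</disk>

Table 7.298.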
Values summary Name Summary qcow2_v2 The default Copy On Write compatibility version. It means that every QEMU can use it. qcow2_v3 The Copy On Write compatibility version which was introduced in QEMU 1.1. 7.227.1. qcow2_v3 The Copy On Write compatibility version which was introduced in QEMU 1.1. It means that the new format is in use. 7.228. Qos struct This type represents the attributes to define Quality of service (QoS). For storage, the type is storage and the attributes max_throughput , max_read_throughput , max_write_throughput , max_iops , max_read_iops and max_write_iops are relevant. For resources with computing capabilities, the type is cpu and the attribute cpu_limit is relevant. For virtual machine networks, the type is network and the attributes inbound_average , inbound_peak , inbound_burst , outbound_average , outbound_peak and outbound_burst are relevant. For host networks, the type is hostnetwork and the attributes outbound_average_linkshare , outbound_average_upperlimit and outbound_average_realtime are relevant. Table 7.299. Attributes summary Name Type Summary comment String Free text containing comments about this object. cpu_limit Integer The maximum processing capability in %. description String A human-readable description in plain text. id String A unique identifier. inbound_average Integer The desired average inbound bit rate in Mbps (Megabits per sec). inbound_burst Integer The amount of data that can be delivered in a single burst, in MB. inbound_peak Integer The maximum inbound rate in Mbps (Megabits per sec). max_iops Integer Maximum permitted number of input and output operations per second. max_read_iops Integer Maximum permitted number of input operations per second. max_read_throughput Integer Maximum permitted throughput for read operations. max_throughput Integer Maximum permitted total throughput. max_write_iops Integer Maximum permitted number of output operations per second. max_write_throughput Integer Maximum permitted throughput for write operations. name String A human-readable name in plain text. outbound_average Integer The desired average outbound bit rate in Mbps (Megabits per sec). outbound_average_linkshare Integer Weighted share. outbound_average_realtime Integer The committed rate in Mbps (Megabits per sec). outbound_average_upperlimit Integer The maximum bandwidth to be used by a network in Mbps (Megabits per sec). outbound_burst Integer The amount of data that can be sent in a single burst, in MB. outbound_peak Integer The maximum outbound rate in Mbps (Megabits per sec). type QosType The kind of resources this entry can be assigned to. 7.228.1. cpu_limit The maximum processing capability in %. Used to configure computing resources. 7.228.2. inbound_average The desired average inbound bit rate in Mbps (Megabits per sec). Used to configure virtual machine networks. If defined, inbound_peak and inbound_burst must also be set. See Libvirt-QOS for further details. 7.228.3. inbound_burst The amount of data that can be delivered in a single burst, in MB. Used to configure virtual machine networks. If defined, inbound_average and inbound_peak must also be set. See Libvirt-QOS for further details. 7.228.4. inbound_peak The maximum inbound rate in Mbps (Megabits per sec). Used to configure virtual machine networks. If defined, inbound_average and inbound_burst must also be set. See Libvirt-QOS for further details. 7.228.5. max_iops Maximum permitted number of input and output operations per second. Used to configure storage.
Must not be set if max_read_iops or max_write_iops is set. 7.228.6. max_read_iops Maximum permitted number of input operations per second. Used to configure storage. Must not be set if max_iops is set. 7.228.7. max_read_throughput Maximum permitted throughput for read operations. Used to configure storage. Must not be set if max_throughput is set. 7.228.8. max_throughput Maximum permitted total throughput. Used to configure storage. Must not be set if max_read_throughput or max_write_throughput is set. 7.228.9. max_write_iops Maximum permitted number of output operations per second. Used to configure storage. Must not be set if max_iops is set. 7.228.10. max_write_throughput Maximum permitted throughput for write operations. Used to configure storage. Must not be set if max_throughput is set. 7.228.11. outbound_average The desired average outbound bit rate in Mbps (Megabits per sec). Used to configure virtual machine networks. If defined, outbound_peak and outbound_burst must also be set. See Libvirt-QOS for further details. 7.228.12. outbound_average_linkshare Weighted share. Used to configure host networks. Signifies how much of the logical link's capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100. 7.228.13. outbound_average_realtime The committed rate in Mbps (Megabits per sec). Used to configure host networks. The minimum bandwidth required by a network. The committed rate requested is not guaranteed and will vary depending on the network infrastructure and the committed rate requested by other networks on the same logical link. 7.228.14. outbound_average_upperlimit The maximum bandwidth to be used by a network in Mbps (Megabits per sec). Used to configure host networks. If outbound_average_upperlimit and outbound_average_realtime are provided, outbound_average_upperlimit must not be lower than outbound_average_realtime . See Libvirt-QOS for further details. 7.228.15. outbound_burst The amount of data that can be sent in a single burst, in MB. Used to configure virtual machine networks. If defined, outbound_average and outbound_peak must also be set. See Libvirt-QOS for further details. 7.228.16. outbound_peak The maximum outbound rate in Mbps (Megabits per sec). Used to configure virtual machine networks. If defined, outbound_average and outbound_burst must also be set. See Libvirt-QOS for further details. Table 7.300. Links summary Name Type Summary data_center DataCenter The data center the QoS is associated to. 7.229. QosType enum This type represents the kind of resource the Quality of service (QoS) can be assigned to. Table 7.301. Values summary Name Summary cpu The Quality of service (QoS) can be assigned to resources with computing capabilities. hostnetwork The Quality of service (QoS) can be assigned to host networks. network The Quality of service (QoS) can be assigned to virtual machine networks. storage The Quality of service (QoS) can be assigned to storage.
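For illustration only, a QoS entry of type storage that combines the attributes above might be sketched as follows (the name and the limit values are hypothetical):

<qos>
  <name>storage_qos_example</name>
  <type>storage</type>
  <max_iops>1000</max_iops>
  <max_throughput>100</max_throughput>
</qos>

Note that, per the constraints above, max_iops is not combined with max_read_iops or max_write_iops , and max_throughput is not combined with max_read_throughput or max_write_throughput . 7.230. Quota struct Represents a quota object.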
An example XML representation of a quota: <quota href="/ovirt-engine/api/datacenters/7044934e/quotas/dcad5ddc" id="dcad5ddc"> <name>My Quota</name> <description>A quota for my oVirt environment</description> <cluster_hard_limit_pct>0</cluster_hard_limit_pct> <cluster_soft_limit_pct>0</cluster_soft_limit_pct> <data_center href="/ovirt-engine/api/datacenters/7044934e" id="7044934e"/> <storage_hard_limit_pct>0</storage_hard_limit_pct> <storage_soft_limit_pct>0</storage_soft_limit_pct> </quota> Table 7.302. Attributes summary Name Type Summary cluster_hard_limit_pct Integer cluster_soft_limit_pct Integer comment String Free text containing comments about this object. data_center DataCenter description String A human-readable description in plain text. disks Disk[ ] id String A unique identifier. name String A human-readable name in plain text. storage_hard_limit_pct Integer storage_soft_limit_pct Integer users User[ ] vms Vm[ ] Table 7.303. Links summary Name Type Summary permissions Permission[ ] quota_cluster_limits QuotaClusterLimit[ ] quota_storage_limits QuotaStorageLimit[ ] 7.231. QuotaClusterLimit struct Table 7.304. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. memory_limit Decimal memory_usage Decimal name String A human-readable name in plain text. vcpu_limit Integer vcpu_usage Integer Table 7.305. Links summary Name Type Summary cluster Cluster quota Quota 7.232. QuotaModeType enum Table 7.306. Values summary Name Summary audit disabled enabled 7.233. QuotaStorageLimit struct Table 7.307. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. limit Integer name String A human-readable name in plain text. usage Decimal Table 7.308. Links summary Name Type Summary quota Quota storage_domain StorageDomain 7.234. Range struct Table 7.309. Attributes summary Name Type Summary from String to String 7.235. Rate struct Determines maximum speed of consumption of bytes from random number generator device. Table 7.310. Attributes summary Name Type Summary bytes Integer Number of bytes allowed to consume per period. period Integer Duration of one period in milliseconds. 7.236. RegistrationAffinityGroupMapping struct This type describes how to map affinity groups as part of the object registration. An object can be a virtual machine, template, etc. An example of an XML representation using this mapping: <action> <registration_configuration> <affinity_group_mappings> <registration_affinity_group_mapping> <from> <name>affinity</name> </from> <to> <name>affinity2</name> </to> </registration_affinity_group_mapping> </affinity_group_mappings> </registration_configuration> </action> Table 7.311. Links summary Name Type Summary from AffinityGroup Reference to the original affinity group. to AffinityGroup Reference to the destination affinity group. 7.236.1. from Reference to the original affinity group. It can be specified using name . 7.237. RegistrationAffinityLabelMapping struct This type describes how to map affinity labels as part of the object registration. An object can be a virtual machine, template, etc. 
An example of an XML representation using mapping: <action> <registration_configuration> <affinity_label_mappings> <registration_affinity_label_mapping> <from> <name>affinity_label</name> </from> <to> <name>affinity_label2</name> </to> </registration_affinity_label_mapping> </affinity_label_mappings> </registration_configuration> </action> Table 7.312. Links summary Name Type Summary from AffinityLabel Reference to the original affinity label. to AffinityLabel Reference to the destination affinity label. 7.237.1. from Reference to the original affinity label. It can be specified using name . 7.238. RegistrationClusterMapping struct This type describes how to map clusters as part of the object registration. An object can be a virtual machine, template, etc. An example of an XML representation using this mapping: <action> <registration_configuration> <cluster_mappings> <registration_cluster_mapping> <from> <name>myoriginalcluster</name> </from> <to> <name>mynewcluster</name> </to> </registration_cluster_mapping> </cluster_mappings> </registration_configuration> </action> Table 7.313. Links summary Name Type Summary from Cluster Reference to the original cluster. to Cluster Reference to the destination cluster. 7.238.1. from Reference to the original cluster. It can be specified using the id or the name . 7.238.2. to Reference to the destination cluster. It can be specified using the id or the name . 7.239. RegistrationConfiguration struct This type describes how an object (virtual machine, template, etc) is registered, and is used for the implementation of disaster recovery solutions. Each mapping contained in this type can be used to map objects in the original system to corresponding objects in the system where the virtual machine or template is being registered. For example, there could be a primary setup with a virtual machine configured on cluster A, and an active secondary setup with cluster B. Cluster B is compatible with that virtual machine, and in case of a disaster recovery scenario the storage domain can be imported to the secondary setup, and the user can register the virtual machine to cluster B. In that case, we can automate the recovery process by defining a cluster mapping. After the entity is registered, its OVF will indicate it belongs to cluster A, but the mapping will indicate that cluster A will be replaced with cluster B. Red Hat Virtualization Manager should do the switch and register the virtual machine to cluster B in the secondary site. Cluster mapping is just one example, there are different types of mappings: Cluster mapping. LUN mapping. Role mapping. Domain mapping. Permissions mapping. Affinity Group mapping. Affinity Label mapping. Virtual NIC profile mapping. Each mapping will be used for its specific OVF's data once the register operation takes place in the Red Hat Virtualization Manager. 
An example of an XML representation using the mapping: <action> <registration_configuration> <cluster_mappings> <registration_cluster_mapping> <from> <name>myoriginalcluster</name> </from> <to> <name>mynewcluster</name> </to> </registration_cluster_mapping> </cluster_mappings> <role_mappings> <registration_role_mapping> <from> <name>SuperUser</name> </from> <to> <name>UserVmRunTimeManager</name> </to> </registration_role_mapping> </role_mappings> <domain_mappings> <registration_domain_mapping> <from> <name>redhat</name> </from> <to> <name>internal</name> </to> </registration_domain_mapping> </domain_mappings> <lun_mappings> <registration_lun_mapping> <from id="111"> </from> <to id="222"> <alias>weTestLun</alias> <lun_storage> <type>iscsi</type> <logical_units> <logical_unit id="36001405fb1ddb4b91e44078f1fffcfef"> <address>44.33.11.22</address> <port>3260</port> <portal>1</portal> <target>iqn.2017-11.com.name.redhat:444</target> </logical_unit> </logical_units> </lun_storage> </to> </registration_lun_mapping> </lun_mappings> <affinity_group_mappings> <registration_affinity_group_mapping> <from> <name>affinity</name> </from> <to> <name>affinity2</name> </to> </registration_affinity_group_mapping> </affinity_group_mappings> <affinity_label_mappings> <registration_affinity_label_mapping> <from> <name>affinity_label</name> </from> <to> <name>affinity_label2</name> </to> </registration_affinity_label_mapping> </affinity_label_mappings> <vnic_profile_mappings> <registration_vnic_profile_mapping> <from> <name>gold</name> <network> <name>red</name> </network> </from> <to id="738dd914-8ec8-4a8b-8628-34672a5d449b"/> </registration_vnic_profile_mapping> <registration_vnic_profile_mapping> <from> <name>silver</name> <network> <name>blue</name> </network> </from> <to> <name>copper</name> <network> <name>orange</name> </network> </to> </registration_vnic_profile_mapping> </vnic_profile_mappings> </registration_configuration> </action> Table 7.314. Attributes summary Name Type Summary affinity_group_mappings RegistrationAffinityGroupMapping[ ] Describes how the affinity groups are mapped. affinity_label_mappings RegistrationAffinityLabelMapping[ ] Describes how the affinity labels are mapped. cluster_mappings RegistrationClusterMapping[ ] Describes how the clusters that the object references are mapped. domain_mappings RegistrationDomainMapping[ ] Describes how the users' domains are mapped. lun_mappings RegistrationLunMapping[ ] Describes how the LUNs are mapped. role_mappings RegistrationRoleMapping[ ] Describes how the roles are mapped. vnic_profile_mappings RegistrationVnicProfileMapping[ ] Mapping rules for virtual NIC profiles that will be applied during the register process. 7.240. RegistrationDomainMapping struct This type describes how to map the users' domain as part of the object registration. An object can be a virtual machine, template, etc. NOTE: This is based on the assumption that user names will be the same, and that only the domain name will be changed. An example of an XML representation using this mapping: <action> <registration_configuration> <domain_mappings> <registration_domain_mapping> <from> <name>redhat</name> </from> <to> <name>internal</name> </to> </registration_domain_mapping> </domain_mappings> </registration_configuration> </action> Table 7.315. Links summary Name Type Summary from Domain Reference to the original domain. to Domain Reference to the destination domain. 7.240.1. from Reference to the original domain. It can be specified using name . 7.241. 
RegistrationLunMapping struct This type describes how to map LUNs as part of the object registration. An object can be a virtual machine, template, etc. An external LUN disk is an entity which does not reside on a storage domain. It must be specified because it doesn't need to exist in the environment where the object is registered. An example of an XML representation using this mapping: <action> <registration_configuration> <lun_mappings> <registration_lun_mapping> <from id="111"> </from> <to id="222"> <alias>weTestLun</alias> <lun_storage> <type>iscsi</type> <logical_units> <logical_unit id="36001405fb1ddb4b91e44078f1fffcfef"> <address>44.33.11.22</address> <port>3260</port> <portal>1</portal> <target>iqn.2017-11.com.name.redhat:444</target> </logical_unit> </logical_units> </lun_storage> </to> </registration_lun_mapping> </lun_mappings> </registration_configuration> </action> Table 7.316. Links summary Name Type Summary from Disk Reference to the original LUN. to Disk Reference to the LUN which is to be added to the virtual machine. 7.241.1. from Reference to the original LUN. This must be specified using the id attribute. 7.242. RegistrationRoleMapping struct This type describes how to map roles as part of the object registration. An object can be a virtual machine, template, etc. A role mapping is intended to map correlating roles between the primary site and the secondary site. For example, there may be permissions with role UserVmRunTimeManager for the virtual machine that is being registered. Therefore, we can send a mapping that will register the virtual machine in the secondary setup using the SuperUser role instead of UserVmRunTimeManager . An example of an XML representation using this mapping: <action> <registration_configuration> <role_mappings> <registration_role_mapping> <from> <name>SuperUser</name> </from> <to> <name>UserVmRunTimeManager</name> </to> </registration_role_mapping> </role_mappings> </registration_configuration> </action> Table 7.317. Links summary Name Type Summary from Role Reference to the original role. to Role Reference to the destination role. 7.242.1. from Reference to the original role. It can be specified using name . 7.243. RegistrationVnicProfileMapping struct Maps an external virtual NIC profile to one that exists in the Red Hat Virtualization Manager. The target may be specified as a profile ID or a pair of profile name and network name.
If, for example, the desired virtual NIC profile mapping includes the following rows (source network name / source network profile name / target virtual NIC profile ID or names):
red / gold / 738dd914-8ec8-4a8b-8628-34672a5d449b
<empty> (no network name) / <empty> (no network profile name) / 892a12ec-2028-4451-80aa-ff3bf55d6bac
blue / silver / orange\copper
yellow / platinum / <empty> (no profile)
green / bronze / (no target specified)
Then the following snippet should be added to RegistrationConfiguration : <vnic_profile_mappings> <registration_vnic_profile_mapping> <from> <name>gold</name> <network> <name>red</name> </network> </from> <to id="738dd914-8ec8-4a8b-8628-34672a5d449b"/> </registration_vnic_profile_mapping> <registration_vnic_profile_mapping> <from> <name></name> <network> <name></name> </network> </from> <to id="892a12ec-2028-4451-80aa-ff3bf55d6bac"/> </registration_vnic_profile_mapping> <registration_vnic_profile_mapping> <from> <name>silver</name> <network> <name>blue</name> </network> </from> <to> <name>copper</name> <network> <name>orange</name> </network> </to> </registration_vnic_profile_mapping> <registration_vnic_profile_mapping> <from> <name>platinum</name> <network> <name>yellow</name> </network> </from> <to> <name></name> <network> <name></name> </network> </to> </registration_vnic_profile_mapping> <registration_vnic_profile_mapping> <from> <name>bronze</name> <network> <name>green</name> </network> </from> </registration_vnic_profile_mapping> </vnic_profile_mappings> Table 7.318. Links summary Name Type Summary from VnicProfile References to the external network and the external network profile. to VnicProfile Reference to an existing virtual NIC profile. 7.243.1. from References to the external network and the external network profile. Both should be specified using their name . 7.243.2. to Reference to an existing virtual NIC profile. It should be specified using its name or id . Either name or id should be specified but not both. 7.244. ReportedConfiguration struct Table 7.319. Attributes summary Name Type Summary actual_value String expected_value String in_sync Boolean false when the network attachment contains uncommitted network configuration. name String 7.245. ReportedDevice struct Table 7.320. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. ips Ip[ ] mac Mac name String A human-readable name in plain text. type ReportedDeviceType Table 7.321. Links summary Name Type Summary vm Vm 7.246. ReportedDeviceType enum Table 7.322. Values summary Name Summary network 7.247. ResolutionType enum Table 7.323. Values summary Name Summary add copy 7.248. RngDevice struct Random number generator (RNG) device model. Table 7.324. Attributes summary Name Type Summary rate Rate Determines the maximum speed of consumption of bytes from the random number generator device. source RngSource Backend of the random number generator device. 7.249. RngSource enum Representing the random generator backend types. Table 7.325. Values summary Name Summary hwrng Obtains random data from the /dev/hwrng (usually specialized HW generator) device. random Obtains random data from the /dev/random device. urandom Obtains random data from the /dev/urandom device. 7.249.1. urandom Obtains random data from the /dev/urandom device. This RNG source is meant to replace the random RNG source for non-cluster-aware entities (i.e.
Blank template and instance types) and entities associated with clusters with compatibility version 4.1 or higher. 7.250. Role struct Represents a system role. Table 7.326. Attributes summary Name Type Summary administrative Boolean Defines the role as administrative-only or not. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. mutable Boolean Defines the ability to update or delete the role. name String A human-readable name in plain text. 7.250.1. mutable Defines the ability to update or delete the role. Roles with mutable set to false are predefined roles. Table 7.327. Links summary Name Type Summary permits Permit[ ] A link to the permits sub-collection for role permits. user User 7.251. RoleType enum Type representing whether a role is administrative or not. A user which was granted at least one administrative role is considered an administrator. Table 7.328. Values summary Name Summary admin Administrative role. user User role. 7.252. SchedulingPolicy struct Table 7.329. Attributes summary Name Type Summary comment String Free text containing comments about this object. default_policy Boolean description String A human-readable description in plain text. id String A unique identifier. locked Boolean name String A human-readable name in plain text. properties Property[ ] Table 7.330. Links summary Name Type Summary balances Balance[ ] filters Filter[ ] weight Weight[ ] 7.253. SchedulingPolicyUnit struct Table 7.331. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. enabled Boolean id String A unique identifier. internal Boolean name String A human-readable name in plain text. properties Property[ ] type PolicyUnitType 7.254. ScsiGenericIO enum When a direct LUN disk is using SCSI passthrough the privileged I/O policy is determined by this enum. Table 7.332. Values summary Name Summary disabled Disable SCSI passthrough. filtered Disallow privileged SCSI I/O. unfiltered Allow privileged SCSI I/O. 7.255. SeLinux struct Represents SELinux in the system. Table 7.333. Attributes summary Name Type Summary mode SeLinuxMode SELinux current mode. 7.256. SeLinuxMode enum Represents an SELinux enforcement mode. Table 7.334. Values summary Name Summary disabled SELinux is disabled in the kernel. enforcing SELinux is running and enforcing permissions. permissive SELinux is running and logging but not enforcing permissions. 7.257. SerialNumber struct Table 7.335. Attributes summary Name Type Summary policy SerialNumberPolicy value String 7.258. SerialNumberPolicy enum Type representing the policy of a Serial Number. Table 7.336. Values summary Name Summary custom This policy allows the user to provide an arbitrary string as the Serial Number. host This policy is the legacy policy. none This policy is used to remove the Serial Number Policy, moving it to default: null. vm This policy will use the Virtual Machine ID as the Serial Number. 7.258.1. host This policy is the legacy policy. It will use the Host ID as the Serial Number. 7.259. Session struct Describes a user session to a virtual machine. Table 7.337. Attributes summary Name Type Summary comment String Free text containing comments about this object. console_user Boolean Indicates if this is a console session. description String A human-readable description in plain text. id String A unique identifier. 
ip Ip The IP address the user is connected from. name String A human-readable name in plain text. protocol String The protocol used by the session. 7.259.1. console_user Indicates if this is a console session. The value will be true for console users (SPICE or VNC), and false for others (such as RDP or SSH). 7.259.2. ip The IP address the user is connected from. Currently only available for console users. 7.259.3. protocol The protocol used by the session. Currently not used. Intended for information about how the user is connected: through SPICE, VNC, SSH, or RDP. Table 7.338. Links summary Name Type Summary user User The user related to this session. vm Vm A link to the virtual machine related to this session. 7.259.4. user The user related to this session. If the user is a console user, this is a link to the real Red Hat Virtualization user. Otherwise, only the user name is provided. 7.260. SkipIfConnectivityBroken struct Table 7.339. Attributes summary Name Type Summary enabled Boolean If enabled, a host is not fenced when more than a configurable percentage of hosts in the cluster have also lost connectivity. threshold Integer Threshold for connectivity testing. 7.260.1. enabled If enabled, a host is not fenced when more than a configurable percentage of hosts in the cluster have also lost connectivity. This prevents a fencing storm in cases where there is a global networking issue in the cluster. 7.260.2. threshold Threshold for connectivity testing. If at least the threshold percentage of hosts in the cluster have lost connectivity, fencing will not take place. 7.261. SkipIfSdActive struct This type represents the storage related configuration in the fencing policy. Table 7.340. Attributes summary Name Type Summary enabled Boolean If enabled, fencing is skipped if the host maintains its lease in the storage. 7.261.1. enabled If enabled, fencing is skipped if the host maintains its lease in the storage. This means that if the host still has storage access, it will not be fenced. 7.262. Snapshot struct Represents a snapshot object. Example XML representation: <snapshot id="456" href="/ovirt-engine/api/vms/123/snapshots/456"> <actions> <link rel="restore" href="/ovirt-engine/api/vms/123/snapshots/456/restore"/> </actions> <vm id="123" href="/ovirt-engine/api/vms/123"/> <description>Virtual Machine 1 - Snapshot A</description> <type>active</type> <date>2010-08-16T14:24:29</date> <persist_memorystate>false</persist_memorystate> </snapshot> Table 7.341. Attributes summary Name Type Summary auto_pinning_policy AutoPinningPolicy Specifies if and how the auto CPU and NUMA configuration is applied. bios Bios Reference to the virtual machine's BIOS configuration. comment String Free text containing comments about this object. console Console Console configured for this virtual machine. cpu Cpu The configuration of the virtual machine CPU. cpu_pinning_policy CpuPinningPolicy Specifies if and how the CPU and NUMA configuration is applied. cpu_shares Integer creation_time Date The virtual machine creation date. custom_compatibility_version Version Virtual machine custom compatibility version. custom_cpu_model String custom_emulated_machine String custom_properties CustomProperty[ ] Properties sent to VDSM to configure various hooks. date Date The date when this snapshot was created. delete_protected Boolean If true , the virtual machine cannot be deleted. description String A human-readable description in plain text.
display Display The virtual machine display configuration. domain Domain Domain configured for this virtual machine. fqdn String Fully qualified domain name of the virtual machine. guest_operating_system GuestOperatingSystem What operating system is installed on the virtual machine. guest_time_zone TimeZone What time zone is used by the virtual machine (as returned by guest agent). has_illegal_images Boolean Indicates whether the virtual machine has snapshots with disks in ILLEGAL state. high_availability HighAvailability The virtual machine high availability configuration. id String A unique identifier. initialization Initialization Reference to the virtual machine's initialization configuration. io Io For performance tuning of IO threading. large_icon Icon Virtual machine's large icon. lease StorageDomainLease Reference to the storage domain this virtual machine/template lease resides on. memory Integer The virtual machine's memory, in bytes. memory_policy MemoryPolicy Reference to virtual machine's memory management configuration. migration MigrationOptions Reference to configuration of migration of a running virtual machine to another host. migration_downtime Integer Maximum time the virtual machine can be non-responsive during its live migration to another host in ms. multi_queues_enabled Boolean If true , each virtual interface will get the optimal number of queues, depending on the available virtual CPUs. name String A human-readable name in plain text. next_run_configuration_exists Boolean Virtual machine configuration has been changed and requires restart of the virtual machine. numa_tune_mode NumaTuneMode How the NUMA topology is applied. origin String The origin of this virtual machine. os OperatingSystem Operating system type installed on the virtual machine. payloads Payload[ ] Optional payloads of the virtual machine, used for ISOs to configure it. persist_memorystate Boolean Indicates if the content of the memory of the virtual machine is included in the snapshot. placement_policy VmPlacementPolicy The configuration of the virtual machine's placement policy. rng_device RngDevice Random Number Generator device configuration for this virtual machine. run_once Boolean If true , the virtual machine has been started using the run once command, meaning its configuration might differ from the stored one for the purpose of this single run. serial_number SerialNumber Virtual machine's serial number in a cluster. small_icon Icon Virtual machine's small icon. snapshot_status SnapshotStatus Status of the snapshot. snapshot_type SnapshotType Type of the snapshot. soundcard_enabled Boolean If true , the sound card is added to the virtual machine. sso Sso Reference to the Single Sign On configuration this virtual machine is configured for. start_paused Boolean If true , the virtual machine will be initially in 'paused' state after start. start_time Date The date on which the virtual machine was started. stateless Boolean If true , the virtual machine is stateless - its state (disks) is rolled back after shutdown. status VmStatus The current status of the virtual machine. status_detail String Human-readable detail of the current status. stop_reason String The reason the virtual machine was stopped. stop_time Date The date on which the virtual machine was stopped. storage_error_resume_behaviour VmStorageErrorResumeBehaviour Determines how the virtual machine will be resumed after a storage error. time_zone TimeZone The virtual machine's time zone set by oVirt.
tpm_enabled Boolean If true , a TPM device is added to the virtual machine. tunnel_migration Boolean If true , the network data transfer will be encrypted during virtual machine live migration. type VmType Determines whether the virtual machine is optimized for desktop or server. usb Usb Configuration of USB devices for this virtual machine (count, type). use_latest_template_version Boolean If true , the virtual machine is reconfigured to the latest version of its template when it is started. virtio_scsi VirtioScsi Reference to VirtIO SCSI configuration. virtio_scsi_multi_queues Integer Number of queues for a Virtio-SCSI controller. This field requires virtioScsiMultiQueuesEnabled to be true ; see virtioScsiMultiQueuesEnabled for more information. virtio_scsi_multi_queues_enabled Boolean If true , the Virtio-SCSI devices will obtain a number of multiple queues depending on the available virtual CPUs and disks, or according to the specified virtioScsiMultiQueues. 7.262.1. auto_pinning_policy Specifies if and how the auto CPU and NUMA configuration is applied. Important Since version 4.5 of the engine, this operation is deprecated, and preserved only for backwards compatibility. It might be removed in the future. Please use CpuPinningPolicy instead. 7.262.2. cpu The configuration of the virtual machine CPU. The socket configuration can be updated without rebooting the virtual machine. The cores and the threads require a reboot. For example, to change the number of sockets to 4 immediately, and the number of cores and threads to 2 after reboot, send the following request: With a request body: <vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm> 7.262.3. cpu_pinning_policy Specifies if and how the CPU and NUMA configuration is applied. When not specified, the presence of a CPU pinning string determines whether CpuPinningPolicy is set to None or Manual. 7.262.4. custom_compatibility_version Virtual machine custom compatibility version. Enables a virtual machine to be customized to its own compatibility version. If custom_compatibility_version is set, it overrides the cluster's compatibility version for this particular virtual machine. The compatibility version of a virtual machine is limited by the data center the virtual machine resides in, and is checked against capabilities of the host the virtual machine is planned to run on. 7.262.5. high_availability The virtual machine high availability configuration. If set, the virtual machine will be automatically restarted when it unexpectedly goes down. 7.262.6. initialization Reference to the virtual machine's initialization configuration. Note Since Red Hat Virtualization 4.1.8 this property can be cleared by sending an empty tag. For example, to clear the initialization attribute send a request like this: With a request body like this: <vm> <initialization/> </vm> The response to such a request, and requests with the header All-Content: true will still contain this attribute. 7.262.7. large_icon Virtual machine's large icon. Either set by the user or refers to the image set according to the operating system. 7.262.8. lease Reference to the storage domain this virtual machine/template lease resides on. A virtual machine running with a lease requires checking while running that the lease is not taken by another host, preventing another instance of this virtual machine from running on another host. This provides protection against split-brain in highly available virtual machines.
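For illustration only, a lease might be sketched in a virtual machine representation by referencing the storage domain that holds it, for example (the identifier is hypothetical):

<vm>
  <lease>
    <storage_domain id="123"/>
  </lease>
</vm>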
A template can also have a storage domain defined for a lease so that virtual machines created from this template are preconfigured with this storage domain as the location of their leases. 7.262.9. memory The virtual machine's memory, in bytes. For example, to update a virtual machine to contain 1 Gibibyte (GiB) of memory, send the following request: With the following request body: <vm> <memory>1073741824</memory> </vm> Memory hot plug is supported from Red Hat Virtualization 3.6 onwards. You can use the example above to increase memory while the virtual machine is in state up . The size increment must be divisible by the value of the HotPlugMemoryBlockSizeMb configuration value (256 MiB by default). If the memory size increment is not divisible by this value, the memory size change is only stored to run configuration. Each successful memory hot plug operation creates one or two new memory devices. Memory hot unplug is supported from Red Hat Virtualization 4.2 onwards. Memory hot unplug can only be performed when the virtual machine is in state up . Only previously hot plugged memory devices can be removed by the hot unplug operation. The requested memory decrement is rounded down to match sizes of a combination of previously hot plugged memory devices. The requested memory value is stored to run configuration without rounding. Note Memory in the example is converted to bytes using the following formula: 1 GiB = 2^30 bytes = 1073741824 bytes. Note Red Hat Virtualization Manager internally rounds values down to whole MiBs (1 MiB = 2^20 bytes). 7.262.10. migration Reference to configuration of migration of a running virtual machine to another host. Note The API for querying a migration policy by the ID returned by this method is not implemented yet. Use /ovirt-engine/api/options/MigrationPolicies to get a list of all migration policies with their IDs. 7.262.11. migration_downtime Maximum time the virtual machine can be non-responsive during its live migration to another host in ms. Set either explicitly for the virtual machine or by engine-config -s DefaultMaximumMigrationDowntime=[value] . 7.262.12. next_run_configuration_exists Virtual machine configuration has been changed and requires restart of the virtual machine. The changed configuration is applied when the virtual machine is shut down . 7.262.13. numa_tune_mode How the NUMA topology is applied. Deprecated in favor of NUMA tune per vNUMA node. 7.262.14. origin The origin of this virtual machine. Possible values: ovirt , rhev , vmware , xen , external , hosted_engine , managed_hosted_engine , kvm , physical_machine , hyperv . 7.262.15. persist_memorystate Indicates if the content of the memory of the virtual machine is included in the snapshot. When a snapshot is created, the default value is true . 7.262.16. placement_policy The configuration of the virtual machine's placement policy. This configuration can be updated to pin a virtual machine to one or more hosts. Note Virtual machines that are pinned to multiple hosts cannot be live migrated, but in the event of a host failure, any virtual machine configured to be highly available is automatically restarted on one of the other hosts to which the virtual machine is pinned.
For example, to pin a virtual machine to two hosts, send the following request: With a request body like this: <vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm> 7.262.17. small_icon Virtual machine's small icon. Either set by user or refers to image set according to operating system. 7.262.18. sso Reference to the Single Sign On configuration this virtual machine is configured for. The user can be automatically signed in the virtual machine's operating system when console is opened. 7.262.19. stop_reason The reason the virtual machine was stopped. Optionally set by user when shutting down the virtual machine. 7.262.20. tpm_enabled If true , a TPM device is added to the virtual machine. By default the value is false . This property is only visible when fetching if "All-Content=true" header is set. Table 7.342. Links summary Name Type Summary affinity_labels AffinityLabel[ ] Optional. applications Application[ ] List of applications installed on the virtual machine. cdroms Cdrom[ ] Reference to the ISO mounted to the CDROM. cluster Cluster Reference to cluster the virtual machine belongs to. cpu_profile CpuProfile Reference to CPU profile used by this virtual machine. disk_attachments DiskAttachment[ ] References the disks attached to the virtual machine. disks Disk[ ] List of disks linked to the snapshot. dynamic_cpu DynamicCpu The dynamic configuration of the virtual machine CPU. external_host_provider ExternalHostProvider floppies Floppy[ ] Reference to the ISO mounted to the floppy. graphics_consoles GraphicsConsole[ ] List of graphics consoles configured for this virtual machine. host Host Reference to the host the virtual machine is running on. host_devices HostDevice[ ] References devices associated to this virtual machine. instance_type InstanceType The virtual machine configuration can be optionally predefined via one of the instance types. katello_errata KatelloErratum[ ] Lists all the Katello errata assigned to the virtual machine. mediated_devices VmMediatedDevice[ ] Mediated devices configuration. nics Nic[ ] References the list of network interface devices on the virtual machine. numa_nodes NumaNode[ ] Refers to the NUMA Nodes configuration used by this virtual machine. original_template Template References the original template used to create the virtual machine. permissions Permission[ ] Permissions set for this virtual machine. quota Quota Reference to quota configuration set for this virtual machine. reported_devices ReportedDevice[ ] sessions Session[ ] List of user sessions opened for this virtual machine. snapshots Snapshot[ ] Refers to all snapshots taken from the virtual machine. statistics Statistic[ ] Statistics data collected from this virtual machine. storage_domain StorageDomain Reference to storage domain the virtual machine belongs to. tags Tag[ ] template Template Reference to the template the virtual machine is based on. vm Vm The virtual machine this snapshot has been taken for. vm_pool VmPool Reference to the pool the virtual machine is optionally member of. watchdogs Watchdog[ ] Refers to the Watchdog configuration. 7.262.21. affinity_labels Optional. Used for labeling of sub-clusters. 7.262.22. katello_errata Lists all the Katello errata assigned to the virtual machine. 
You will receive a response in XML like this one: <katello_errata> <katello_erratum href="/ovirt-engine/api/katelloerrata/456" id="456"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> ... </packages> </katello_erratum> ... </katello_errata> 7.262.23. original_template References the original template used to create the virtual machine. If the virtual machine is cloned from a template or another virtual machine, the template links to the Blank template, and the original_template is used to track history. Otherwise the template and original_template are the same. 7.262.24. statistics Statistics data collected from this virtual machine. Note that some statistics, notably memory.buffered and memory.cached , are available only when the Red Hat Virtualization guest agent is installed in the virtual machine. 7.263. SnapshotStatus enum Represents the current status of the snapshot. Table 7.343. Values summary Name Summary in_preview The snapshot is being previewed. locked The snapshot is locked. ok The snapshot is OK. 7.263.1. locked The snapshot is locked. The snapshot is locked when it is in the process of being created, deleted, restored or previewed. 7.264. SnapshotType enum Represents the type of the snapshot. Table 7.344. Values summary Name Summary active Reference to the current configuration of the virtual machine. preview The active snapshot becomes the preview if some snapshot is being previewed. regular Snapshot created by the user. stateless Snapshot created internally for stateless virtual machines. 7.264.1. preview The active snapshot becomes the preview if some snapshot is being previewed. In other words, this is the active snapshot before preview. 7.264.2. stateless Snapshot created internally for stateless virtual machines. This snapshot is created when the virtual machine is started and it is restored when the virtual machine is shut down. 7.265. SpecialObjects struct This type contains references to special objects, such as blank templates and the root of a hierarchy of tags. Table 7.345. Links summary Name Type Summary blank_template Template A reference to a blank template. root_tag Tag A reference to the root of a hierarchy of tags. 7.266. Spm struct Table 7.346. Attributes summary Name Type Summary priority Integer status SpmStatus 7.267. SpmStatus enum Table 7.347. Values summary Name Summary contending none spm 7.268. Ssh struct Table 7.348. Attributes summary Name Type Summary authentication_method SshAuthenticationMethod comment String Free text containing comments about this object. description String A human-readable description in plain text. fingerprint String Fingerprint of the SSH public key for a host. id String A unique identifier. name String A human-readable name in plain text. port Integer public_key String SSH public key of the host, using the SSH public key format as defined in RFC4253 . user User 7.268.1. fingerprint Fingerprint of the SSH public key for a host. This field is deprecated since 4.4.5 and will be removed in the future. Please use publicKey instead. 7.268.2. public_key SSH public key of the host using SSH public key format as defined in RFC4253 . 7.269. SshAuthenticationMethod enum Table 7.349.
Values summary Name Summary password publickey 7.270. SshPublicKey struct Table 7.350. Attributes summary Name Type Summary comment String Free text containing comments about this object. content String Contains a saved SSH key. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.351. Links summary Name Type Summary user User 7.271. Sso struct Table 7.352. Attributes summary Name Type Summary methods Method[ ] 7.272. SsoMethod enum Table 7.353. Values summary Name Summary guest_agent 7.273. Statistic struct A generic type used for all kinds of statistics. Statistic contains the statistics values for various entities. The following objects contain statistics: Disk , Host , HostNic , NumaNode , Nic , Vm , GlusterBrick , Step , and GlusterVolume . An example of an XML representation: <statistics> <statistic id="1234" href="/ovirt-engine/api/hosts/1234/nics/1234/statistics/1234"> <name>data.current.rx</name> <description>Receive data rate</description> <values type="DECIMAL"> <value> <datum>0</datum> </value> </values> <type>GAUGE</type> <unit>BYTES_PER_SECOND</unit> <host_nic id="1234" href="/ovirt-engine/api/hosts/1234/nics/1234"/> </statistic> ... </statistics> Note This statistics sub-collection is read-only. Table 7.354. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. kind StatisticKind The type of statistic measures. name String A human-readable name in plain text. type ValueType The data type for the statistical values that follow. unit StatisticUnit The unit or rate used to measure the statistical values. values Value[ ] A data set that contains datum . Table 7.355. Links summary Name Type Summary brick GlusterBrick disk Disk A relationship to the containing disk resource. gluster_volume GlusterVolume host Host host_nic HostNic A reference to the host NIC. host_numa_node NumaNode nic Nic step Step vm Vm 7.274. StatisticKind enum Table 7.356. Values summary Name Summary counter gauge 7.275. StatisticUnit enum Table 7.357. Values summary Name Summary bits_per_second bytes bytes_per_second count_per_second none percent seconds 7.276. Step struct Represents a step, which is part of job execution. Step is used to describe and track a specific execution unit which is part of a wider sequence. Some steps support reporting their progress. Table 7.358. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. end_time Date The end time of the step. external Boolean Indicates if the step is originated by an external system. external_type ExternalSystemType The external system which is referenced by the step. id String A unique identifier. name String A human-readable name in plain text. number Integer The order of the step in the current hierarchy level. progress Integer The step progress (if reported) in percentages. start_time Date The start time of the step. status StepStatus The status of the step. type StepEnum The type of the step. 7.276.1. external Indicates if the step is originated by an external system. External steps are managed externally, by the creator of the step. Table 7.359. Links summary Name Type Summary execution_host Host The host used for the step execution (optional). job Job References the job which is the top of the current step hierarchy.
parent_step Step References the parent step of the current step in the hierarchy. statistics Statistic[ ]
7.277. StepEnum enum Type representing a step type. Table 7.360. Values summary Name Summary executing The executing step type. finalizing The finalizing step type. rebalancing_volume The rebalancing volume step type. removing_bricks The removing bricks step type. unknown The unknown step type. validating The validation step type.
7.277.1. executing The executing step type. Used to track the main execution block of the job. Usually it will be a parent step of several sub-steps which describe portions of the execution step.
7.277.2. finalizing The finalizing step type. Describes the post-execution steps required to complete the job.
7.277.3. rebalancing_volume The rebalancing volume step type. Describes a step type which is part of a Gluster flow.
7.277.4. removing_bricks The removing bricks step type. Describes a step type which is part of a Gluster flow.
7.277.5. unknown The unknown step type. Describes a step type whose origin is unknown.
7.277.6. validating The validation step type. Used to verify the correctness and validity of the parameters prior to execution.
7.278. StepStatus enum Represents the status of the step. Table 7.361. Values summary Name Summary aborted The aborted step status. failed The failed step status. finished The finished step status. started The started step status. unknown The unknown step status.
7.278.1. aborted The aborted step status. This status is applicable for an external step that was forcibly aborted.
7.278.2. finished The finished step status. This status describes a completed step execution.
7.278.3. started The started step status. This status represents a step which is currently being executed.
7.278.4. unknown The unknown step status. This status represents steps whose resolution is not known, that is, steps that were executed before the system was unexpectedly restarted.
7.279. StorageConnection struct Represents a storage server connection. Example XML representation: <storage_connection id="123"> <address>mynfs.example.com</address> <type>nfs</type> <path>/exports/mydata</path> </storage_connection> Table 7.362. Attributes summary Name Type Summary address String A storage server connection's address. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. mount_options String The mount options of an NFS storage server connection. name String A human-readable name in plain text. nfs_retrans Integer The NFS retrans value of an NFS storage server connection. nfs_timeo Integer The NFS timeo value of an NFS storage server connection. nfs_version NfsVersion The NFS version of an NFS storage server connection. password String The password of an iSCSI storage server connection. path String The path of an NFS storage server connection. port Integer The port of an iSCSI storage server connection. portal String The portal of an iSCSI storage server connection. target String The target of an iSCSI storage server connection. type StorageType A storage server connection's type. username String The user name of an iSCSI storage server connection. vfs_type String The VFS type of an NFS storage server connection. Table 7.363. Links summary Name Type Summary gluster_volume GlusterVolume Link to the gluster volume, used by that storage domain. host Host
7.280. StorageConnectionExtension struct Table 7.364.
Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. password String target String username String Table 7.365. Links summary Name Type Summary host Host 7.281. StorageDomain struct Storage domain. An XML representation of a NFS storage domain with identifier 123 : <storage_domain href="/ovirt-engine/api/storagedomains/123" id="123"> <name>mydata</name> <description>My data</description> <available>38654705664</available> <committed>1073741824</committed> <critical_space_action_blocker>5</critical_space_action_blocker> <external_status>ok</external_status> <master>true</master> <storage> <address>mynfs.example.com</address> <nfs_version>v3</nfs_version> <path>/exports/mydata</path> <type>nfs</type> </storage> <storage_format>v3</storage_format> <type>data</type> <used>13958643712</used> <warning_low_space_indicator>10</warning_low_space_indicator> <wipe_after_delete>false</wipe_after_delete> <data_centers> <data_center href="/ovirt-engine/api/datacenters/456" id="456"/> </data_centers> </storage_domain> Table 7.366. Attributes summary Name Type Summary available Integer backup Boolean This attribute indicates whether a data storage domain is used as backup domain or not. block_size Integer Specifies block size in bytes for a storage domain. comment String Free text containing comments about this object. committed Integer critical_space_action_blocker Integer description String A human-readable description in plain text. discard_after_delete Boolean Indicates whether disk s' blocks on block storage domain s will be discarded right before they are deleted. external_status ExternalStatus id String A unique identifier. import Boolean master Boolean name String A human-readable name in plain text. status StorageDomainStatus storage HostStorage storage_format StorageFormat supports_discard Boolean Indicates whether a block storage domain supports discard operations. supports_discard_zeroes_data Boolean Indicates whether a block storage domain supports the property that discard zeroes the data. type StorageDomainType used Integer warning_low_space_indicator Integer wipe_after_delete Boolean Serves as the default value of wipe_after_delete for disk s on this storage domain . 7.281.1. backup This attribute indicates whether a data storage domain is used as backup domain or not. If the domain is set to backup then it will be used to store virtual machines and templates for disaster recovery purposes in the same way we use export storage domain. This attribute is only available with data storage domain and not with ISO domain or export storage domain. User can use this functionality while creating a data storage domain or importing a data storage domain. 7.281.2. block_size Specifies block size in bytes for a storage domain. Can be omitted and in that case will be defaulted to 512 bytes. Not all storage domains support all possible sizes. 7.281.3. discard_after_delete Indicates whether disk s' blocks on block storage domain s will be discarded right before they are deleted. If true, and a disk on this storage domain has its wipe_after_delete value enabled, then when the disk is deleted: It is first wiped. Then its blocks are discarded. Finally it is deleted. Note that: Discard after delete will always be false for non block storage types. 
Discard after delete can be set to true only if the storage domain supports discard . 7.281.4. supports_discard Indicates whether a block storage domain supports discard operations. A storage domain only supports discard if all of the logical unit s that it is built from support discard; that is, if each logical unit's discard_max_size value is greater than 0. This is one of the conditions necessary for a virtual disk in this storage domain to have its pass_discard attribute enabled. 7.281.5. supports_discard_zeroes_data Indicates whether a block storage domain supports the property that discard zeroes the data. A storage domain only supports the property that discard zeroes the data if all of the logical unit s that it is built from support it; that is, if each logical unit's discard_zeroes_data value is true. Important Since version 4.2.1 of the system, the support for this attribute has been removed as the sysfs file, discard_zeroes_data , was deprecated in the kernel. It is preserved for backwards compatibility, but the value will always be false . 7.281.6. wipe_after_delete Serves as the default value of wipe_after_delete for disk s on this storage domain . That is, newly created disks will get their wipe_after_delete value from their storage domains by default. Note that the configuration value SANWipeAfterDelete serves as the default value of block storage domains' wipe_after_delete value. Table 7.367. Links summary Name Type Summary data_center DataCenter A link to the data center that the storage domain is attached to. data_centers DataCenter[ ] A set of links to the data centers that the storage domain is attached to. disk_profiles DiskProfile[ ] disk_snapshots DiskSnapshot[ ] disks Disk[ ] files File[ ] host Host Host is only relevant at creation time. images Image[ ] permissions Permission[ ] storage_connections StorageConnection[ ] templates Template[ ] vms Vm[ ] 7.281.7. data_center A link to the data center that the storage domain is attached to. This is preserved for backwards compatibility only, as the storage domain may be attached to multiple data centers (if it is an ISO domain). Use the dataCenters element instead. 7.282. StorageDomainLease struct Represents a lease residing on a storage domain. A lease is a Sanlock resource residing on a special volume on the storage domain, this Sanlock resource is used to provide storage base locking. Table 7.368. Links summary Name Type Summary storage_domain StorageDomain Reference to the storage domain on which the lock resides on. 7.283. StorageDomainStatus enum Table 7.369. Values summary Name Summary activating active detaching inactive locked maintenance mixed preparing_for_maintenance unattached unknown 7.284. StorageDomainType enum Indicates the kind of data managed by a storage domain . Table 7.370. Values summary Name Summary data Data domains are used to store the disks and snapshots of the virtual machines and templates in the system. export Export domains are temporary storage repositories used to copy and move virtual machines and templates between data centers and Red Hat Virtualization environments. image Image domain store images that can be imported into from an external system. iso ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. managed_block_storage Managed block storage domains are created on block storage devices. volume Volume domains store logical volumes that can be used as disks for virtual machines. 7.284.1. 
data Data domains are used to store the disks and snapshots of the virtual machines and templates in the system. In addition, snapshots of the disks are also stored in data domains. Data domains cannot be shared across data centers.
7.284.2. export Export domains are temporary storage repositories used to copy and move virtual machines and templates between data centers and Red Hat Virtualization environments. Export domains can also be used to back up virtual machines. An export domain can be moved between data centers, but it can only be active in one data center at a time.
7.284.3. image Image domains store images that can be imported from an external system. For example, images from an OpenStack Glance image repository.
7.284.4. iso ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. ISO domains remove the data center's need for physical media. An ISO domain can be shared across different data centers.
7.284.5. managed_block_storage Managed block storage domains are created on block storage devices. These domains are accessed and managed by Cinder.
7.284.6. volume Volume domains store logical volumes that can be used as disks for virtual machines. For example, volumes from an OpenStack Cinder block storage service.
7.285. StorageFormat enum Type which represents a format of storage domain. Table 7.371. Values summary Name Summary v1 Version 1 of the storage domain format is applicable to NFS, iSCSI and FC storage domains. v2 Version 2 of the storage domain format is applicable to iSCSI and FC storage domains. v3 Version 3 of the storage domain format is applicable to NFS, POSIX, iSCSI and FC storage domains. v4 Version 4 of the storage domain format. v5 Version 5 of the storage domain format is applicable to NFS, POSIX, and Gluster storage domains.
7.285.1. v1 Version 1 of the storage domain format is applicable to NFS, iSCSI and FC storage domains. Each storage domain contains metadata describing its own structure, and all of the names of physical volumes that are used to back virtual machine disk images. Master domains additionally contain metadata for all the domains and physical volume names in the storage pool. The total size of this metadata is limited to 2 KiB, limiting the number of storage domains that can be in a pool. Template and virtual machine base images are read only.
7.285.2. v2 Version 2 of the storage domain format is applicable to iSCSI and FC storage domains. All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual machine disk volumes is still stored in a logical volume on the domains. Physical volume names are no longer included in the metadata. Template and virtual machine base images are read only.
7.285.3. v3 Version 3 of the storage domain format is applicable to NFS, POSIX, iSCSI and FC storage domains. All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual machine disk volumes is still stored in a logical volume on the domains. Virtual machine and template base images are no longer read only. This change enables live snapshots, live storage migration, and clone from snapshot. Support for Unicode metadata is added, for non-English volume names.
7.285.4. v5 Version 5 of the storage domain format is applicable to NFS, POSIX, and Gluster storage domains. Added support for 4096-byte block sizes and variable sanlock alignments.
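For instance, the type and format of an existing storage domain can be read back from its REST representation. The following is a minimal sketch only; the identifier 123 and the returned values are illustrative, and a real response contains the full set of attributes listed in the StorageDomain struct above:
GET /ovirt-engine/api/storagedomains/123
<storage_domain href="/ovirt-engine/api/storagedomains/123" id="123">
  <name>mydata</name>
  <type>data</type>
  <storage_format>v5</storage_format>
  <storage>
    <type>nfs</type>
  </storage>
</storage_domain>
Here the <type> element carries a StorageDomainType value and <storage_format> carries a StorageFormat value.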
7.286. StorageType enum Type representing a storage domain type. Table 7.372. Values summary Name Summary cinder Cinder storage domain. fcp Fibre-Channel storage domain. glance Glance storage domain. glusterfs Gluster-FS storage domain. iscsi iSCSI storage domain. localfs Storage domain on Local storage. managed_block_storage Managed block storage domain. nfs NFS storage domain. posixfs POSIX-FS storage domain. 7.286.1. cinder Cinder storage domain. For more details on Cinder please go to Cinder . 7.286.2. glance Glance storage domain. For more details on Glance please go to Glance . 7.286.3. glusterfs Gluster-FS storage domain. For more details on Gluster please go to Gluster . 7.286.4. managed_block_storage Managed block storage domain. A storage domain managed using cinderlib. For supported storage drivers, see Available Drivers . 7.287. SwitchType enum Describes all switch types supported by the Manager. Table 7.373. Values summary Name Summary legacy The native switch type. ovs The Open vSwitch type. 7.288. SystemOption struct Type representing a configuration option of the system. Table 7.374. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. values SystemOptionValue[ ] Values of the option for various system versions. 7.289. SystemOptionValue struct Type representing a pair of value and version of a configuration option. Table 7.375. Attributes summary Name Type Summary value String Configuration option's value for specific version. version String Configuration option's version. 7.290. Tag struct Represents a tag in the system. Table 7.376. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.377. Links summary Name Type Summary group Group Reference to the group which has this tag assigned. host Host Reference to the host which has this tag assigned. parent Tag Reference to the parent tag of this tag. template Template Reference to the template which has this tag assigned. user User Reference to the user who has this tag assigned. vm Vm Reference to the virtual machine which has this tag assigned. 7.291. Template struct The type that represents a virtual machine template. Templates allow for a rapid instantiation of virtual machines with common configuration and disk states. Table 7.378. Attributes summary Name Type Summary auto_pinning_policy AutoPinningPolicy Specifies if and how the auto CPU and NUMA configuration is applied. bios Bios Reference to virtual machine's BIOS configuration. comment String Free text containing comments about this object. console Console Console configured for this virtual machine. cpu Cpu The configuration of the virtual machine CPU. cpu_pinning_policy CpuPinningPolicy Specifies if and how the CPU and NUMA configuration is applied. cpu_shares Integer creation_time Date The virtual machine creation date. custom_compatibility_version Version Virtual machine custom compatibility version. custom_cpu_model String custom_emulated_machine String custom_properties CustomProperty[ ] Properties sent to VDSM to configure various hooks. delete_protected Boolean If true , the virtual machine cannot be deleted. description String A human-readable description in plain text. 
display Display The virtual machine display configuration. domain Domain Domain configured for this virtual machine. high_availability HighAvailability The virtual machine high availability configuration. id String A unique identifier. initialization Initialization Reference to the virtual machine's initialization configuration. io Io For performance tuning of IO threading. large_icon Icon Virtual machine's large icon. lease StorageDomainLease Reference to the storage domain this virtual machine/template lease reside on. memory Integer The virtual machine's memory, in bytes. memory_policy MemoryPolicy Reference to virtual machine's memory management configuration. migration MigrationOptions Reference to configuration of migration of a running virtual machine to another host. migration_downtime Integer Maximum time the virtual machine can be non responsive during its live migration to another host in ms. multi_queues_enabled Boolean If true , each virtual interface will get the optimal number of queues, depending on the available virtual Cpus. name String A human-readable name in plain text. origin String The origin of this virtual machine. os OperatingSystem Operating system type installed on the virtual machine. placement_policy VmPlacementPolicy The configuration of the virtual machine's placement policy. rng_device RngDevice Random Number Generator device configuration for this virtual machine. serial_number SerialNumber Virtual machine's serial number in a cluster. small_icon Icon Virtual machine's small icon. soundcard_enabled Boolean If true , the sound card is added to the virtual machine. sso Sso Reference to the Single Sign On configuration this virtual machine is configured for. start_paused Boolean If true , the virtual machine will be initially in 'paused' state after start. stateless Boolean If true , the virtual machine is stateless - it's state (disks) are rolled-back after shutdown. status TemplateStatus The status of the template. storage_error_resume_behaviour VmStorageErrorResumeBehaviour Determines how the virtual machine will be resumed after storage error. time_zone TimeZone The virtual machine's time zone set by oVirt. tpm_enabled Boolean If true , a TPM device is added to the virtual machine. tunnel_migration Boolean If true , the network data transfer will be encrypted during virtual machine live migration. type VmType Determines whether the virtual machine is optimized for desktop or server. usb Usb Configuration of USB devices for this virtual machine (count, type). version TemplateVersion Indicates whether this is the base version or a sub-version of another template. virtio_scsi VirtioScsi Reference to VirtIO SCSI configuration. virtio_scsi_multi_queues Integer Number of queues for a Virtio-SCSI contoller this field requires virtioScsiMultiQueuesEnabled to be true see virtioScsiMultiQueuesEnabled for more info virtio_scsi_multi_queues_enabled Boolean If true , the Virtio-SCSI devices will obtain a number of multiple queues depending on the available virtual Cpus and disks, or according to the specified virtioScsiMultiQueues. vm Vm The virtual machine configuration associated with this template. 7.291.1. auto_pinning_policy Specifies if and how the auto CPU and NUMA configuration is applied. Important Since version 4.5 of the engine this operation is deprecated, and preserved only for backwards compatibility. It might be removed in the future. Please use CpuPinningPolicy instead. 7.291.2. cpu The configuration of the virtual machine CPU. 
The socket configuration can be updated without rebooting the virtual machine. The cores and the threads require a reboot. For example, to change the number of sockets to 4 immediately, and the number of cores and threads to 2 after reboot, send the following request: With a request body: <vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm>
7.291.3. cpu_pinning_policy Specifies if and how the CPU and NUMA configuration is applied. When not specified, the CPU pinning string determines whether the CpuPinningPolicy is None or Manual.
7.291.4. custom_compatibility_version Virtual machine custom compatibility version. Enables a virtual machine to be customized to its own compatibility version. If custom_compatibility_version is set, it overrides the cluster's compatibility version for this particular virtual machine. The compatibility version of a virtual machine is limited by the data center the virtual machine resides in, and is checked against capabilities of the host the virtual machine is planned to run on.
7.291.5. high_availability The virtual machine high availability configuration. If set, the virtual machine will be automatically restarted when it unexpectedly goes down.
7.291.6. initialization Reference to the virtual machine's initialization configuration. Note Since Red Hat Virtualization 4.1.8 this property can be cleared by sending an empty tag. For example, to clear the initialization attribute send a request like this: With a request body like this: <vm> <initialization/> </vm> The response to such a request, and requests with the header All-Content: true will still contain this attribute.
7.291.7. large_icon Virtual machine's large icon. Either set by user or refers to image set according to operating system.
7.291.8. lease Reference to the storage domain that this virtual machine's or template's lease resides on. A virtual machine running with a lease requires checking while running that the lease is not taken by another host, preventing another instance of this virtual machine from running on another host. This provides protection against split-brain in highly available virtual machines. A template can also have a storage domain defined for a lease in order to have the virtual machines created from this template to be preconfigured with this storage domain as the location of the leases.
7.291.9. memory The virtual machine's memory, in bytes. For example, to update a virtual machine to contain 1 Gibibyte (GiB) of memory, send the following request: With the following request body: <vm> <memory>1073741824</memory> </vm> Memory hot plug is supported from Red Hat Virtualization 3.6 onwards. You can use the example above to increase memory while the virtual machine is in state up. The size increment must be divisible by the value of the HotPlugMemoryBlockSizeMb configuration value (256 MiB by default). If the memory size increment is not divisible by this value, the memory size change is only stored to the run configuration. Each successful memory hot plug operation creates one or two new memory devices. Memory hot unplug is supported from Red Hat Virtualization 4.2 onwards. Memory hot unplug can only be performed when the virtual machine is in state up. Only previously hot plugged memory devices can be removed by the hot unplug operation. The requested memory decrement is rounded down to match sizes of a combination of previously hot plugged memory devices. The requested memory value is stored to the run configuration without rounding.
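Assembled into a single call, the memory update described above looks like the following sketch, where the path segment 123 stands in for the actual virtual machine identifier:
PUT /ovirt-engine/api/vms/123
<vm>
  <memory>1073741824</memory>
</vm>
The same pattern applies to the CPU topology and initialization examples in this section: a PUT request to the virtual machine resource carrying the relevant element in the request body.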
Note Memory in the example is converted to bytes using the following formula: 1 GiB = 2 30 bytes = 1073741824 bytes. Note Red Hat Virtualization Manager internally rounds values down to whole MiBs (1MiB = 2 20 bytes) 7.291.10. migration Reference to configuration of migration of a running virtual machine to another host. Note API for querying migration policy by ID returned by this method is not implemented yet. Use /ovirt-engine/api/options/MigrationPolicies to get a list of all migration policies with their IDs. 7.291.11. migration_downtime Maximum time the virtual machine can be non responsive during its live migration to another host in ms. Set either explicitly for the virtual machine or by engine-config -s DefaultMaximumMigrationDowntime=[value] 7.291.12. origin The origin of this virtual machine. Possible values: ovirt rhev vmware xen external hosted_engine managed_hosted_engine kvm physical_machine hyperv 7.291.13. placement_policy The configuration of the virtual machine's placement policy. This configuration can be updated to pin a virtual machine to one or more hosts. Note Virtual machines that are pinned to multiple hosts cannot be live migrated, but in the event of a host failure, any virtual machine configured to be highly available is automatically restarted on one of the other hosts to which the virtual machine is pinned. For example, to pin a virtual machine to two hosts, send the following request: With a request body like this: <vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm> 7.291.14. small_icon Virtual machine's small icon. Either set by user or refers to image set according to operating system. 7.291.15. sso Reference to the Single Sign On configuration this virtual machine is configured for. The user can be automatically signed in the virtual machine's operating system when console is opened. 7.291.16. tpm_enabled If true , a TPM device is added to the virtual machine. By default the value is false . This property is only visible when fetching if "All-Content=true" header is set. Table 7.379. Links summary Name Type Summary cdroms Cdrom[ ] Reference to the CD-ROM devices attached to the template. cluster Cluster Reference to cluster the virtual machine belongs to. cpu_profile CpuProfile Reference to CPU profile used by this virtual machine. disk_attachments DiskAttachment[ ] Reference to the disks attached to the template. graphics_consoles GraphicsConsole[ ] Reference to the graphic consoles attached to the template. mediated_devices VmMediatedDevice[ ] Mediated devices configuration. nics Nic[ ] Reference to the network interfaces attached to the template. permissions Permission[ ] Reference to the user permissions attached to the template. quota Quota Reference to quota configuration set for this virtual machine. storage_domain StorageDomain Reference to storage domain the virtual machine belongs to. tags Tag[ ] Reference to the tags attached to the template. watchdogs Watchdog[ ] Reference to the watchdog devices attached to the template. 7.292. TemplateStatus enum Type representing a status of a virtual machine template. Table 7.380. Values summary Name Summary illegal This status indicates that at least one of the disks of the template is illegal. locked This status indicates that some operation that prevents other operations with the template is being executed. 
ok This status indicates that the template is valid and ready for use. 7.293. TemplateVersion struct Type representing a version of a virtual machine template. Table 7.381. Attributes summary Name Type Summary version_name String The name of this version. version_number Integer The index of this version in the versions hierarchy of the template. 7.293.1. version_number The index of this version in the versions hierarchy of the template. The index 1 represents the original version of a template that is also called base version. Table 7.382. Links summary Name Type Summary base_template Template References the template that this version is associated with. 7.294. Ticket struct Type representing a ticket that allows virtual machine access. Table 7.383. Attributes summary Name Type Summary expiry Integer Time to live for the ticket in seconds. value String The virtual machine access ticket. 7.295. TimeZone struct Time zone representation. Table 7.384. Attributes summary Name Type Summary name String Name of the time zone. utc_offset String UTC offset. 7.295.1. utc_offset UTC offset. Offset from UTC . 7.296. TpmSupport enum Table 7.385. Values summary Name Summary required TPM is required by the operating system supported TPM is supported but optional unsupported 7.297. TransparentHugePages struct Type representing a transparent huge pages (THP) support. Table 7.386. Attributes summary Name Type Summary enabled Boolean Enable THP support. 7.298. TransportType enum Protocol used to access a Gluster volume. Table 7.387. Values summary Name Summary rdma Remote direct memory access. tcp TCP. 7.299. UnmanagedNetwork struct Table 7.388. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. Table 7.389. Links summary Name Type Summary host Host host_nic HostNic 7.300. Usb struct Configuration of the USB device of a virtual machine. Table 7.390. Attributes summary Name Type Summary enabled Boolean Determines whether the USB device should be included or not. type UsbType USB type, currently only native is supported. 7.301. UsbType enum Type of USB device redirection. Table 7.391. Values summary Name Summary legacy Legacy USB redirection. native Native USB redirection. 7.301.1. legacy Legacy USB redirection. This USB type has been deprecated since version 3.6 of the engine, and has been completely removed in version 4.1. It is preserved only to avoid syntax errors in existing scripts. If it is used it will be automatically replaced by native . 7.301.2. native Native USB redirection. Native USB redirection allows KVM/SPICE USB redirection for Linux and Windows virtual machines. Virtual (guest) machines require no guest-installed agents or drivers for native USB. On Linux clients, all packages required for USB redirection are provided by the virt-viewer package. On Windows clients, you must also install the usbdk package. 7.302. User struct Represents a user in the system. Table 7.392. Attributes summary Name Type Summary comment String Free text containing comments about this object. department String description String A human-readable description in plain text. domain_entry_id String email String id String A unique identifier. last_name String logged_in Boolean name String A human-readable name in plain text. namespace String Namespace where the user resides. password String principal String Similar to user_name . 
user_name String The user's username. user_options Property[ ] User options allow you to save key/value properties which are used to customize the settings per individual user. 7.302.1. namespace Namespace where the user resides. When using the authorization provider that stores users in the LDAP server, this attribute equals the naming context of the LDAP server. See oVirt Engine Extension AAA LDAP for more information. When using the built-in authorization provider that stores users in the database this attribute is ignored. See oVirt Engine extension - AAA - JDBC for more information. 7.302.2. principal Similar to user_name . The format depends on the LDAP provider. With most LDAP providers it is the value of the uid LDAP attribute. In the case of Active Directory it is the User Principal Name (UPN). 7.302.3. user_name The user's username. The format depends on authorization provider type. In most LDAP providers it is the value of the uid LDAP attribute. In Active Directory it is the User Principal Name (UPN). UPN or uid must be followed by the authorization provider name. For example, in the case of LDAP's uid attribute it is: myuser@myextension-authz . In the case of Active Directory using UPN it is: [email protected]@myextension-authz . This attribute is a required parameter when adding a new user. 7.302.4. user_options User options allow you to save key/value properties which are used to customize the settings per individual user. Note that since version 4.4.5 this property is deprecated and preserved only for backwards compatibility. It will be removed in the future. Please use the options endpoint instead. Table 7.393. Links summary Name Type Summary domain Domain groups Group[ ] options UserOption[ ] permissions Permission[ ] roles Role[ ] A link to the roles sub-collection for user resources. ssh_public_keys SshPublicKey[ ] tags Tag[ ] A link to the tags sub-collection for user resources. 7.303. UserOption struct User options allow you to save key/value properties which are used to customize the settings per individual user. Table 7.394. Attributes summary Name Type Summary comment String Free text containing comments about this object. content String JSON content encoded as string. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. 7.303.1. content JSON content encoded as string. Any valid JSON is supported. Table 7.395. Links summary Name Type Summary user User 7.304. Value struct Table 7.396. Attributes summary Name Type Summary datum Decimal detail String 7.305. ValueType enum Table 7.397. Values summary Name Summary decimal integer string 7.306. VcpuPin struct Table 7.398. Attributes summary Name Type Summary cpu_set String vcpu Integer 7.307. Vendor struct Table 7.399. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. 7.308. Version struct Table 7.400. Attributes summary Name Type Summary build Integer comment String Free text containing comments about this object. description String A human-readable description in plain text. full_version String id String A unique identifier. major Integer minor Integer name String A human-readable name in plain text. revision Integer 7.309. VgpuPlacement enum The vGPU placement strategy. 
It can either put vGPUs on the first available physical cards, or spread them over multiple physical cards. Table 7.401. Values summary Name Summary consolidated Use consolidated placement. separated Use separated placement. 7.309.1. consolidated Use consolidated placement. Each vGPU is placed on the first physical card with available space. This is the default placement, utilizing all available space on the physical cards. 7.309.2. separated Use separated placement. Each vGPU is placed on a separate physical card, if possible. This can be useful for improving vGPU performance. 7.310. VirtioScsi struct Type representing the support of virtio-SCSI. If it supported we use virtio driver for SCSI guest device. Table 7.402. Attributes summary Name Type Summary enabled Boolean Enable Virtio SCSI support. 7.311. VirtualNumaNode struct Represents the virtual NUMA node. An example XML representation: <vm_numa_node href="/ovirt-engine/api/vms/123/numanodes/456" id="456"> <cpu> <cores> <core> <index>0</index> </core> </cores> </cpu> <index>0</index> <memory>1024</memory> <numa_node_pins> <numa_node_pin> <index>0</index> </numa_node_pin> </numa_node_pins> <vm href="/ovirt-engine/api/vms/123" id="123" /> </vm_numa_node> Table 7.403. Attributes summary Name Type Summary comment String Free text containing comments about this object. cpu Cpu description String A human-readable description in plain text. id String A unique identifier. index Integer memory Integer Memory of the NUMA node in MB. name String A human-readable name in plain text. node_distance String numa_node_pins NumaNodePin[ ] numa_tune_mode NumaTuneMode How the NUMA topology is applied. Table 7.404. Links summary Name Type Summary host Host statistics Statistic[ ] Each host NUMA node resource exposes a statistics sub-collection for host NUMA node specific statistics. vm Vm 7.311.1. statistics Each host NUMA node resource exposes a statistics sub-collection for host NUMA node specific statistics. An example of an XML representation: <statistics> <statistic href="/ovirt-engine/api/hosts/123/numanodes/456/statistics/789" id="789"> <name>memory.total</name> <description>Total memory</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>25165824000</datum> </value> </values> <host_numa_node href="/ovirt-engine/api/hosts/123/numanodes/456" id="456" /> </statistic> ... </statistics> Note This statistics sub-collection is read-only. The following list shows the statistic types for a host NUMA node: Name Description memory.total Total memory in bytes on the NUMA node. memory.used Memory in bytes used on the NUMA node. memory.free Memory in bytes free on the NUMA node. cpu.current.user Percentage of CPU usage for user slice. cpu.current.system Percentage of CPU usage for system. cpu.current.idle Percentage of idle CPU usage. 7.312. Vlan struct Type representing a Virtual LAN (VLAN) type. Table 7.405. Attributes summary Name Type Summary id Integer Virtual LAN ID. 7.313. Vm struct Represents a virtual machine. Table 7.406. Attributes summary Name Type Summary auto_pinning_policy AutoPinningPolicy Specifies if and how the auto CPU and NUMA configuration is applied. bios Bios Reference to virtual machine's BIOS configuration. comment String Free text containing comments about this object. console Console Console configured for this virtual machine. cpu Cpu The configuration of the virtual machine CPU. cpu_pinning_policy CpuPinningPolicy Specifies if and how the CPU and NUMA configuration is applied. 
cpu_shares Integer creation_time Date The virtual machine creation date. custom_compatibility_version Version Virtual machine custom compatibility version. custom_cpu_model String custom_emulated_machine String custom_properties CustomProperty[ ] Properties sent to VDSM to configure various hooks. delete_protected Boolean If true , the virtual machine cannot be deleted. description String A human-readable description in plain text. display Display The virtual machine display configuration. domain Domain Domain configured for this virtual machine. fqdn String Fully qualified domain name of the virtual machine. guest_operating_system GuestOperatingSystem What operating system is installed on the virtual machine. guest_time_zone TimeZone What time zone is used by the virtual machine (as returned by guest agent). has_illegal_images Boolean Indicates whether the virtual machine has snapshots with disks in ILLEGAL state. high_availability HighAvailability The virtual machine high availability configuration. id String A unique identifier. initialization Initialization Reference to the virtual machine's initialization configuration. io Io For performance tuning of IO threading. large_icon Icon Virtual machine's large icon. lease StorageDomainLease Reference to the storage domain this virtual machine/template lease reside on. memory Integer The virtual machine's memory, in bytes. memory_policy MemoryPolicy Reference to virtual machine's memory management configuration. migration MigrationOptions Reference to configuration of migration of a running virtual machine to another host. migration_downtime Integer Maximum time the virtual machine can be non responsive during its live migration to another host in ms. multi_queues_enabled Boolean If true , each virtual interface will get the optimal number of queues, depending on the available virtual Cpus. name String A human-readable name in plain text. next_run_configuration_exists Boolean Virtual machine configuration has been changed and requires restart of the virtual machine. numa_tune_mode NumaTuneMode How the NUMA topology is applied. origin String The origin of this virtual machine. os OperatingSystem Operating system type installed on the virtual machine. payloads Payload[ ] Optional payloads of the virtual machine, used for ISOs to configure it. placement_policy VmPlacementPolicy The configuration of the virtual machine's placement policy. rng_device RngDevice Random Number Generator device configuration for this virtual machine. run_once Boolean If true , the virtual machine has been started using the run once command, meaning it's configuration might differ from the stored one for the purpose of this single run. serial_number SerialNumber Virtual machine's serial number in a cluster. small_icon Icon Virtual machine's small icon. soundcard_enabled Boolean If true , the sound card is added to the virtual machine. sso Sso Reference to the Single Sign On configuration this virtual machine is configured for. start_paused Boolean If true , the virtual machine will be initially in 'paused' state after start. start_time Date The date in which the virtual machine was started. stateless Boolean If true , the virtual machine is stateless - it's state (disks) are rolled-back after shutdown. status VmStatus The current status of the virtual machine. status_detail String Human readable detail of current status. stop_reason String The reason the virtual machine was stopped. stop_time Date The date in which the virtual machine was stopped. 
storage_error_resume_behaviour VmStorageErrorResumeBehaviour Determines how the virtual machine will be resumed after storage error. time_zone TimeZone The virtual machine's time zone set by oVirt. tpm_enabled Boolean If true , a TPM device is added to the virtual machine. tunnel_migration Boolean If true , the network data transfer will be encrypted during virtual machine live migration. type VmType Determines whether the virtual machine is optimized for desktop or server. usb Usb Configuration of USB devices for this virtual machine (count, type). use_latest_template_version Boolean If true , the virtual machine is reconfigured to the latest version of it's template when it is started. virtio_scsi VirtioScsi Reference to VirtIO SCSI configuration. virtio_scsi_multi_queues Integer Number of queues for a Virtio-SCSI contoller this field requires virtioScsiMultiQueuesEnabled to be true see virtioScsiMultiQueuesEnabled for more info virtio_scsi_multi_queues_enabled Boolean If true , the Virtio-SCSI devices will obtain a number of multiple queues depending on the available virtual Cpus and disks, or according to the specified virtioScsiMultiQueues. 7.313.1. auto_pinning_policy Specifies if and how the auto CPU and NUMA configuration is applied. Important Since version 4.5 of the engine this operation is deprecated, and preserved only for backwards compatibility. It might be removed in the future. Please use CpuPinningPolicy instead. 7.313.2. cpu The configuration of the virtual machine CPU. The socket configuration can be updated without rebooting the virtual machine. The cores and the threads require a reboot. For example, to change the number of sockets to 4 immediately, and the number of cores and threads to 2 after reboot, send the following request: With a request body: <vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm> 7.313.3. cpu_pinning_policy Specifies if and how the CPU and NUMA configuration is applied. When not specified the behavior of CPU pinning string will determine CpuPinningPolicy to None or Manual. 7.313.4. custom_compatibility_version Virtual machine custom compatibility version. Enables a virtual machine to be customized to its own compatibility version. If custom_compatibility_version is set, it overrides the cluster's compatibility version for this particular virtual machine. The compatibility version of a virtual machine is limited by the data center the virtual machine resides in, and is checked against capabilities of the host the virtual machine is planned to run on. 7.313.5. high_availability The virtual machine high availability configuration. If set, the virtual machine will be automatically restarted when it unexpectedly goes down. 7.313.6. initialization Reference to the virtual machine's initialization configuration. Note Since Red Hat Virtualization 4.1.8 this property can be cleared by sending an empty tag. For example, to clear the initialization attribute send a request like this: With a request body like this: <vm> <initialization/> </vm> The response to such a request, and requests with the header All-Content: true will still contain this attribute. 7.313.7. large_icon Virtual machine's large icon. Either set by user or refers to image set according to operating system. 7.313.8. lease Reference to the storage domain this virtual machine/template lease reside on. 
A virtual machine running with a lease requires checking while running that the lease is not taken by another host, preventing another instance of this virtual machine from running on another host. This provides protection against split-brain in highly available virtual machines. A template can also have a storage domain defined for a lease in order to have the virtual machines created from this template to be preconfigured with this storage domain as the location of the leases. 7.313.9. memory The virtual machine's memory, in bytes. For example, to update a virtual machine to contain 1 Gibibyte (GiB) of memory, send the following request: With the following request body: <vm> <memory>1073741824</memory> </vm> Memory hot plug is supported from Red Hat Virtualization 3.6 onwards. You can use the example above to increase memory while the virtual machine is in state up . The size increment must be dividable by the value of the HotPlugMemoryBlockSizeMb configuration value (256 MiB by default). If the memory size increment is not dividable by this value, the memory size change is only stored to run configuration. Each successful memory hot plug operation creates one or two new memory devices. Memory hot unplug is supported since Red Hat Virtualization 4.2 onwards. Memory hot unplug can only be performed when the virtual machine is in state up . Only previously hot plugged memory devices can be removed by the hot unplug operation. The requested memory decrement is rounded down to match sizes of a combination of previously hot plugged memory devices. The requested memory value is stored to run configuration without rounding. Note Memory in the example is converted to bytes using the following formula: 1 GiB = 2 30 bytes = 1073741824 bytes. Note Red Hat Virtualization Manager internally rounds values down to whole MiBs (1MiB = 2 20 bytes) 7.313.10. migration Reference to configuration of migration of a running virtual machine to another host. Note API for querying migration policy by ID returned by this method is not implemented yet. Use /ovirt-engine/api/options/MigrationPolicies to get a list of all migration policies with their IDs. 7.313.11. migration_downtime Maximum time the virtual machine can be non responsive during its live migration to another host in ms. Set either explicitly for the virtual machine or by engine-config -s DefaultMaximumMigrationDowntime=[value] 7.313.12. next_run_configuration_exists Virtual machine configuration has been changed and requires restart of the virtual machine. Changed configuration is applied at processing the virtual machine's shut down . 7.313.13. numa_tune_mode How the NUMA topology is applied. Deprecated in favor of NUMA tune per vNUMA node. 7.313.14. origin The origin of this virtual machine. Possible values: ovirt rhev vmware xen external hosted_engine managed_hosted_engine kvm physical_machine hyperv 7.313.15. placement_policy The configuration of the virtual machine's placement policy. This configuration can be updated to pin a virtual machine to one or more hosts. Note Virtual machines that are pinned to multiple hosts cannot be live migrated, but in the event of a host failure, any virtual machine configured to be highly available is automatically restarted on one of the other hosts to which the virtual machine is pinned. 
For example, to pin a virtual machine to two hosts, send the following request: With a request body like this: <vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm> 7.313.16. small_icon Virtual machine's small icon. Either set by user or refers to image set according to operating system. 7.313.17. sso Reference to the Single Sign On configuration this virtual machine is configured for. The user can be automatically signed in the virtual machine's operating system when console is opened. 7.313.18. stop_reason The reason the virtual machine was stopped. Optionally set by user when shutting down the virtual machine. 7.313.19. tpm_enabled If true , a TPM device is added to the virtual machine. By default the value is false . This property is only visible when fetching if "All-Content=true" header is set. Table 7.407. Links summary Name Type Summary affinity_labels AffinityLabel[ ] Optional. applications Application[ ] List of applications installed on the virtual machine. cdroms Cdrom[ ] Reference to the ISO mounted to the CDROM. cluster Cluster Reference to cluster the virtual machine belongs to. cpu_profile CpuProfile Reference to CPU profile used by this virtual machine. disk_attachments DiskAttachment[ ] References the disks attached to the virtual machine. dynamic_cpu DynamicCpu The dynamic configuration of the virtual machine CPU. external_host_provider ExternalHostProvider floppies Floppy[ ] Reference to the ISO mounted to the floppy. graphics_consoles GraphicsConsole[ ] List of graphics consoles configured for this virtual machine. host Host Reference to the host the virtual machine is running on. host_devices HostDevice[ ] References devices associated to this virtual machine. instance_type InstanceType The virtual machine configuration can be optionally predefined via one of the instance types. katello_errata KatelloErratum[ ] Lists all the Katello errata assigned to the virtual machine. mediated_devices VmMediatedDevice[ ] Mediated devices configuration. nics Nic[ ] References the list of network interface devices on the virtual machine. numa_nodes NumaNode[ ] Refers to the NUMA Nodes configuration used by this virtual machine. original_template Template References the original template used to create the virtual machine. permissions Permission[ ] Permissions set for this virtual machine. quota Quota Reference to quota configuration set for this virtual machine. reported_devices ReportedDevice[ ] sessions Session[ ] List of user sessions opened for this virtual machine. snapshots Snapshot[ ] Refers to all snapshots taken from the virtual machine. statistics Statistic[ ] Statistics data collected from this virtual machine. storage_domain StorageDomain Reference to storage domain the virtual machine belongs to. tags Tag[ ] template Template Reference to the template the virtual machine is based on. vm_pool VmPool Reference to the pool the virtual machine is optionally member of. watchdogs Watchdog[ ] Refers to the Watchdog configuration. 7.313.20. affinity_labels Optional. Used for labeling of sub-clusters. 7.313.21. katello_errata Lists all the Katello errata assigned to the virtual machine. 
You will receive response in XML like this one: <katello_errata> <katello_erratum href="/ovirt-engine/api/katelloerrata/456" id="456"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> ... </packages> </katello_erratum> ... </katello_errata> 7.313.22. original_template References the original template used to create the virtual machine. If the virtual machine is cloned from a template or another virtual machine, the template links to the Blank template, and the original_template is used to track history. Otherwise the template and original_template are the same. 7.313.23. statistics Statistics data collected from this virtual machine. Note that some statistics, notably memory.buffered and memory.cached are available only when Red Hat Virtualization guest agent is installed in the virtual machine. 7.314. VmAffinity enum Table 7.408. Values summary Name Summary migratable pinned user_migratable 7.315. VmBase struct Represents basic virtual machine configuration. This is used by virtual machines, templates and instance types. Table 7.409. Attributes summary Name Type Summary auto_pinning_policy AutoPinningPolicy Specifies if and how the auto CPU and NUMA configuration is applied. bios Bios Reference to virtual machine's BIOS configuration. comment String Free text containing comments about this object. console Console Console configured for this virtual machine. cpu Cpu The configuration of the virtual machine CPU. cpu_pinning_policy CpuPinningPolicy Specifies if and how the CPU and NUMA configuration is applied. cpu_shares Integer creation_time Date The virtual machine creation date. custom_compatibility_version Version Virtual machine custom compatibility version. custom_cpu_model String custom_emulated_machine String custom_properties CustomProperty[ ] Properties sent to VDSM to configure various hooks. delete_protected Boolean If true , the virtual machine cannot be deleted. description String A human-readable description in plain text. display Display The virtual machine display configuration. domain Domain Domain configured for this virtual machine. high_availability HighAvailability The virtual machine high availability configuration. id String A unique identifier. initialization Initialization Reference to the virtual machine's initialization configuration. io Io For performance tuning of IO threading. large_icon Icon Virtual machine's large icon. lease StorageDomainLease Reference to the storage domain this virtual machine/template lease reside on. memory Integer The virtual machine's memory, in bytes. memory_policy MemoryPolicy Reference to virtual machine's memory management configuration. migration MigrationOptions Reference to configuration of migration of a running virtual machine to another host. migration_downtime Integer Maximum time the virtual machine can be non responsive during its live migration to another host in ms. multi_queues_enabled Boolean If true , each virtual interface will get the optimal number of queues, depending on the available virtual Cpus. name String A human-readable name in plain text. origin String The origin of this virtual machine. 
os OperatingSystem Operating system type installed on the virtual machine. placement_policy VmPlacementPolicy The configuration of the virtual machine's placement policy. rng_device RngDevice Random Number Generator device configuration for this virtual machine. serial_number SerialNumber Virtual machine's serial number in a cluster. small_icon Icon Virtual machine's small icon. soundcard_enabled Boolean If true , the sound card is added to the virtual machine. sso Sso Reference to the Single Sign On configuration this virtual machine is configured for. start_paused Boolean If true , the virtual machine will be initially in 'paused' state after start. stateless Boolean If true , the virtual machine is stateless - it's state (disks) are rolled-back after shutdown. storage_error_resume_behaviour VmStorageErrorResumeBehaviour Determines how the virtual machine will be resumed after storage error. time_zone TimeZone The virtual machine's time zone set by oVirt. tpm_enabled Boolean If true , a TPM device is added to the virtual machine. tunnel_migration Boolean If true , the network data transfer will be encrypted during virtual machine live migration. type VmType Determines whether the virtual machine is optimized for desktop or server. usb Usb Configuration of USB devices for this virtual machine (count, type). virtio_scsi VirtioScsi Reference to VirtIO SCSI configuration. virtio_scsi_multi_queues Integer Number of queues for a Virtio-SCSI contoller this field requires virtioScsiMultiQueuesEnabled to be true see virtioScsiMultiQueuesEnabled for more info virtio_scsi_multi_queues_enabled Boolean If true , the Virtio-SCSI devices will obtain a number of multiple queues depending on the available virtual Cpus and disks, or according to the specified virtioScsiMultiQueues. 7.315.1. auto_pinning_policy Specifies if and how the auto CPU and NUMA configuration is applied. Important Since version 4.5 of the engine this operation is deprecated, and preserved only for backwards compatibility. It might be removed in the future. Please use CpuPinningPolicy instead. 7.315.2. cpu The configuration of the virtual machine CPU. The socket configuration can be updated without rebooting the virtual machine. The cores and the threads require a reboot. For example, to change the number of sockets to 4 immediately, and the number of cores and threads to 2 after reboot, send the following request: With a request body: <vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm> 7.315.3. cpu_pinning_policy Specifies if and how the CPU and NUMA configuration is applied. When not specified the behavior of CPU pinning string will determine CpuPinningPolicy to None or Manual. 7.315.4. custom_compatibility_version Virtual machine custom compatibility version. Enables a virtual machine to be customized to its own compatibility version. If custom_compatibility_version is set, it overrides the cluster's compatibility version for this particular virtual machine. The compatibility version of a virtual machine is limited by the data center the virtual machine resides in, and is checked against capabilities of the host the virtual machine is planned to run on. 7.315.5. high_availability The virtual machine high availability configuration. If set, the virtual machine will be automatically restarted when it unexpectedly goes down. 7.315.6. initialization Reference to the virtual machine's initialization configuration. 
Note Since Red Hat Virtualization 4.1.8 this property can be cleared by sending an empty tag. For example, to clear the initialization attribute send a request like this: With a request body like this: <vm> <initialization/> </vm> The response to such a request, and requests with the header All-Content: true will still contain this attribute. 7.315.7. large_icon Virtual machine's large icon. Either set by user or refers to image set according to operating system. 7.315.8. lease Reference to the storage domain this virtual machine/template lease reside on. A virtual machine running with a lease requires checking while running that the lease is not taken by another host, preventing another instance of this virtual machine from running on another host. This provides protection against split-brain in highly available virtual machines. A template can also have a storage domain defined for a lease in order to have the virtual machines created from this template to be preconfigured with this storage domain as the location of the leases. 7.315.9. memory The virtual machine's memory, in bytes. For example, to update a virtual machine to contain 1 Gibibyte (GiB) of memory, send the following request: With the following request body: <vm> <memory>1073741824</memory> </vm> Memory hot plug is supported from Red Hat Virtualization 3.6 onwards. You can use the example above to increase memory while the virtual machine is in state up . The size increment must be dividable by the value of the HotPlugMemoryBlockSizeMb configuration value (256 MiB by default). If the memory size increment is not dividable by this value, the memory size change is only stored to run configuration. Each successful memory hot plug operation creates one or two new memory devices. Memory hot unplug is supported since Red Hat Virtualization 4.2 onwards. Memory hot unplug can only be performed when the virtual machine is in state up . Only previously hot plugged memory devices can be removed by the hot unplug operation. The requested memory decrement is rounded down to match sizes of a combination of previously hot plugged memory devices. The requested memory value is stored to run configuration without rounding. Note Memory in the example is converted to bytes using the following formula: 1 GiB = 2 30 bytes = 1073741824 bytes. Note Red Hat Virtualization Manager internally rounds values down to whole MiBs (1MiB = 2 20 bytes) 7.315.10. migration Reference to configuration of migration of a running virtual machine to another host. Note API for querying migration policy by ID returned by this method is not implemented yet. Use /ovirt-engine/api/options/MigrationPolicies to get a list of all migration policies with their IDs. 7.315.11. migration_downtime Maximum time the virtual machine can be non responsive during its live migration to another host in ms. Set either explicitly for the virtual machine or by engine-config -s DefaultMaximumMigrationDowntime=[value] 7.315.12. origin The origin of this virtual machine. Possible values: ovirt rhev vmware xen external hosted_engine managed_hosted_engine kvm physical_machine hyperv 7.315.13. placement_policy The configuration of the virtual machine's placement policy. This configuration can be updated to pin a virtual machine to one or more hosts. 
Note Virtual machines that are pinned to multiple hosts cannot be live migrated, but in the event of a host failure, any virtual machine configured to be highly available is automatically restarted on one of the other hosts to which the virtual machine is pinned. For example, to pin a virtual machine to two hosts, send the following request: With a request body like this: <vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm> 7.315.14. small_icon Virtual machine's small icon. Either set by user or refers to image set according to operating system. 7.315.15. sso Reference to the Single Sign On configuration this virtual machine is configured for. The user can be automatically signed in the virtual machine's operating system when console is opened. 7.315.16. tpm_enabled If true , a TPM device is added to the virtual machine. By default the value is false . This property is only visible when fetching if "All-Content=true" header is set. Table 7.410. Links summary Name Type Summary cluster Cluster Reference to cluster the virtual machine belongs to. cpu_profile CpuProfile Reference to CPU profile used by this virtual machine. quota Quota Reference to quota configuration set for this virtual machine. storage_domain StorageDomain Reference to storage domain the virtual machine belongs to. 7.316. VmDeviceType enum Table 7.411. Values summary Name Summary cdrom floppy 7.317. VmMediatedDevice struct VM mediated device is a fake device specifying properties of vGPU mediated devices. It is not an actual device, it just serves as a specification how to configure a part of a host device. Table 7.412. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. name String A human-readable name in plain text. spec_params Property[ ] Properties of the device. Table 7.413. Links summary Name Type Summary instance_type InstanceType Optionally references to an instance type the device is used by. template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 7.317.1. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk my be used simultaneously by two or more virtual machines. 7.318. VmPlacementPolicy struct Table 7.414. Attributes summary Name Type Summary affinity VmAffinity Table 7.415. Links summary Name Type Summary hosts Host[ ] 7.319. VmPool struct Type representing a virtual machines pool. Table 7.416. Attributes summary Name Type Summary auto_storage_select Boolean Indicates if the pool should automatically distribute the disks of the virtual machines across the multiple storage domains where the template is copied. comment String Free text containing comments about this object. description String A human-readable description in plain text. display Display The display settings configured for virtual machines in the pool. id String A unique identifier. max_user_vms Integer The maximum number of virtual machines in the pool that could be assigned to a particular user. name String A human-readable name in plain text. 
prestarted_vms Integer The system attempts to prestart the specified number of virtual machines from the pool. rng_device RngDevice The random number generator device configured for virtual machines in the pool. size Integer The number of virtual machines in the pool. soundcard_enabled Boolean Indicates if sound card should be configured for virtual machines in the pool. stateful Boolean Virtual machine pool's stateful flag. tpm_enabled Boolean If true , a TPM device is added to the virtual machine. type VmPoolType The deallocation policy of virtual machines in the pool. use_latest_template_version Boolean Indicates if virtual machines in the pool are updated to newer versions of the template the pool is based on. 7.319.1. auto_storage_select Indicates if the pool should automatically distribute the disks of the virtual machines across the multiple storage domains where the template is copied. When the template used by the pool is present in multiple storage domains, the disks of the virtual machines of the pool will be created in one of those storage domains. By default, or when the value of this attribute is false , that storage domain is selected when the pool is created, and all virtual machines will use the same. If this attribute is true , then, when a virtual machine is added to the pool, the storage domain that has more free space is selected. 7.319.2. display The display settings configured for virtual machines in the pool. Warning Please note that this attribute is not working and is now deprecated. Please use Vm.display instead. 7.319.3. prestarted_vms The system attempts to prestart the specified number of virtual machines from the pool. These virtual machines are started without being attached to any user. That way, users can acquire virtual machines from the pool faster. 7.319.4. stateful Virtual machine pool's stateful flag. Virtual machines from a stateful virtual machine pool are always started in stateful mode (stateless snapshot is not created). The state of the virtual machine is preserved even when the virtual machine is passed to a different user. 7.319.5. tpm_enabled If true , a TPM device is added to the virtual machine. By default the value is false . This property is only visible when fetching if "All-Content=true" header is set. Table 7.417. Links summary Name Type Summary cluster Cluster Reference to the cluster the pool resides in. instance_type InstanceType Reference to the instance type on which this pool is based. permissions Permission[ ] Permissions set for this virtual machine pool. template Template Reference to the template the pool is based on. vm Vm Reference to an arbitrary virtual machine that is part of the pool. 7.319.6. instance_type Reference to the instance type on which this pool is based. It can be set only on pool creation and cannot be edited. 7.319.7. vm Reference to an arbitrary virtual machine that is part of the pool. Note that this virtual machine may not be based to the latest version of the pool's template. 7.320. VmPoolType enum Type representing the deallocation policy of virtual machines in a virtual machines pool. Table 7.418. Values summary Name Summary automatic This policy indicates that virtual machines in the pool are automcatically deallocated by the system. manual This policy indicates that virtual machines in the pool are deallocated manually by the administrator. 7.320.1. automatic This policy indicates that virtual machines in the pool are automcatically deallocated by the system. 
With this policy, when a virtual machine that is part of the pool and is assigned to a user is shut-down, it is detached from the user, its state is restored to the pool's default state, and the virtual machine returns to pool (i.e., the virtual machine can then be assigned to another user). 7.320.2. manual This policy indicates that virtual machines in the pool are deallocated manually by the administrator. With this policy, a virtual machine that is part of the pool remains assigned to its user and preserves its state on shut-down. In order to return the virtual machine back to the pool, the administrator needs to deallocate it explicitly by removing the user's permissions on that virtual machine. 7.321. VmStatus enum Type representing a status of a virtual machine. Table 7.419. Values summary Name Summary down This status indicates that the virtual machine process is not running. image_locked This status indicates that the virtual machine process is not running and there is some operation on the disks of the virtual machine that prevents it from being started. migrating This status indicates that the virtual machine process is running and the virtual machine is being migrated from one host to another. not_responding This status indicates that the hypervisor detected that the virtual machine is not responding. paused This status indicates that the virtual machine process is running and the virtual machine is paused. powering_down This status indicates that the virtual machine process is running and it is about to stop running. powering_up This status indicates that the virtual machine process is running and the guest operating system is being loaded. reboot_in_progress This status indicates that the virtual machine process is running and the guest operating system is being rebooted. restoring_state This status indicates that the virtual machine process is about to run and the virtual machine is going to awake from hibernation. saving_state This status indicates that the virtual machine process is running and the virtual machine is being hibernated. suspended This status indicates that the virtual machine process is not running and a running state of the virtual machine was saved. unassigned This status is set when an invalid status is received. unknown This status indicates that the system failed to determine the status of the virtual machine. up This status indicates that the virtual machine process is running and the guest operating system is loaded. wait_for_launch This status indicates that the virtual machine process is about to run. 7.321.1. paused This status indicates that the virtual machine process is running and the virtual machine is paused. This may happen in two cases: when running a virtual machine is paused mode and when the virtual machine is being automatically paused due to an error. 7.321.2. powering_up This status indicates that the virtual machine process is running and the guest operating system is being loaded. Note that if no guest-agent is installed, this status is set for a predefined period of time, that is by default 60 seconds, when running a virtual machine. 7.321.3. restoring_state This status indicates that the virtual machine process is about to run and the virtual machine is going to awake from hibernation. In this status, the running state of the virtual machine is being restored. 7.321.4. saving_state This status indicates that the virtual machine process is running and the virtual machine is being hibernated. 
In this status, the running state of the virtual machine is being saved. Note that this status does not mean that the guest operating system is being hibernated. 7.321.5. suspended This status indicates that the virtual machine process is not running and a running state of the virtual machine was saved. This status is similar to Down, but when the VM is started in this status its saved running state is restored instead of being booted using the normal procedue. 7.321.6. unknown This status indicates that the system failed to determine the status of the virtual machine. The virtual machine process may be running or not running in this status. For instance, when host becomes non-responsive the virtual machines that ran on it are set with this status. 7.321.7. up This status indicates that the virtual machine process is running and the guest operating system is loaded. Note that if no guest-agent is installed, this status is set after a predefined period of time, that is by default 60 seconds, when running a virtual machine. 7.321.8. wait_for_launch This status indicates that the virtual machine process is about to run. This status is set when a request to run a virtual machine arrives to the host. It is possible that the virtual machine process will fail to run. 7.322. VmStorageErrorResumeBehaviour enum If the storage, on which this virtual machine has some disks gets unresponsive, the virtual machine gets paused. This are the possible options, what should happen with the virtual machine in the moment the storage gets available again. Table 7.420. Values summary Name Summary auto_resume The virtual machine gets resumed automatically in the moment the storage is available again. kill The virtual machine will be killed after a timeout (configurable on the hypervisor). leave_paused Do nothing with the virtual machine. 7.322.1. auto_resume The virtual machine gets resumed automatically in the moment the storage is available again. This is the only behavior available before 4.2. 7.322.2. kill The virtual machine will be killed after a timeout (configurable on the hypervisor). This is the only option supported for highly available virtual machines with leases. The reason is that the highly available virtual machine is restarted using the infrastructure and any kind of resume risks split brains. 7.322.3. leave_paused Do nothing with the virtual machine. Useful if there is a custom failover implemented and the user does not want the virtual machine to get resumed. 7.323. VmSummary struct Type containing information related to virtual machines on a particular host. Table 7.421. Attributes summary Name Type Summary active Integer The number of virtual machines active on the host. migrating Integer The number of virtual machines migrating to or from the host. total Integer The number of virtual machines present on the host. 7.324. VmType enum Type representing what the virtual machine is optimized for. Table 7.422. Values summary Name Summary desktop The virtual machine is intended to be used as a desktop. high_performance The virtual machine is intended to be used as a high performance virtual machine. server The virtual machine is intended to be used as a server. 7.324.1. desktop The virtual machine is intended to be used as a desktop. Currently, its implication is that a sound device will automatically be added to the virtual machine. 7.324.2. high_performance The virtual machine is intended to be used as a high performance virtual machine. 
Currently, its implication is that the virtual machine configuration will automatically be set for running with the highest possible performance, and with performance metrics as close to bare metal as possible. Some of the recommended configuration settings for the highest possible performance cannot be set automatically; manually setting them before running the virtual machine is recommended. The following configuration changes are set automatically: Enable headless mode. Enable serial console. Enable pass-through host CPU. Enable I/O threads. Enable I/O threads pinning and set the pinning topology. Enable the paravirtualized random number generator PCI (virtio-rng) device. Disable all USB devices. Disable the soundcard device. Disable the smartcard device. Disable the memory balloon device. Disable the watchdog device. Disable migration. Disable high availability. The following recommended configuration changes have to be set manually by the user: Enable CPU pinning topology. Enable non-uniform memory access (NUMA) pinning topology. Enable and set huge pages configuration. Disable kernel same-page merging (KSM). 7.324.3. server The virtual machine is intended to be used as a server. Currently, its implication is that a sound device will not automatically be added to the virtual machine. 7.325. VnicPassThrough struct Table 7.423. Attributes summary Name Type Summary mode VnicPassThroughMode Defines whether the vNIC will be implemented as a virtual device, or as a pass-through to a host device. 7.326. VnicPassThroughMode enum Describes whether the vNIC is to be implemented as a pass-through device or a virtual one. Table 7.424. Values summary Name Summary disabled To be implemented as a virtual device. enabled To be implemented as a pass-through device. 7.327. VnicProfile struct A vNIC profile is a collection of settings that can be applied to individual NIC . Table 7.425. Attributes summary Name Type Summary comment String Free text containing comments about this object. custom_properties CustomProperty[ ] Custom properties applied to the vNIC profile. description String A human-readable description in plain text. id String A unique identifier. migratable Boolean Marks whether pass_through NIC is migratable or not. name String A human-readable name in plain text. pass_through VnicPassThrough Enables passthrough to an SR-IOV-enabled host NIC . port_mirroring Boolean Enables port mirroring. 7.327.1. migratable Marks whether pass_through NIC is migratable or not. If pass_through.mode is set to disabled this option has no meaning, and it will be considered to be true . If you omit this option from a request, by default, this will be set to true . When migrating a virtual machine, this virtual machine will be migrated only if all pass_through NICs are flagged as migratable . 7.327.2. pass_through Enables passthrough to an SR-IOV-enabled host NIC . A vNIC profile enables a NIC to be directly connected to a virtual function (VF) of an SR-IOV-enabled host NIC, if passthrough is enabled. The NIC will then bypass the software network virtualization and connect directly to the VF for direct device assignment. Passthrough cannot be enabled if the vNIC profile is already attached to a NIC. If a vNIC profile has passthrough enabled, qos and port_mirroring are disabled for the vNIC profile. 7.327.3. port_mirroring Enables port mirroring. Port mirroring copies layer 3 network traffic on a given logical network and host to a NIC on a virtual machine . 
This virtual machine can be used for network debugging and tuning, intrusion detection, and monitoring the behavior of other virtual machines on the same host and logical network. The only traffic copied is internal to one logical network on one host. There is no increase in traffic on the network external to the host; however a virtual machine with port mirroring enabled uses more host CPU and RAM than other virtual machines. Port mirroring has the following limitations: Hot linking a NIC with a vNIC profile that has port mirroring enabled is not supported. Port mirroring cannot be altered when the vNIC profile is attached to a virtual machine. Given the above limitations, it is recommended that you enable port mirroring on an additional, dedicated vNIC profile. Important Enabling port mirroring reduces the privacy of other network users. Table 7.426. Links summary Name Type Summary failover VnicProfile Failover vNIC profile for SR-IOV migration without downtime network Network Reference to the network that the vNIC profile is applied to. network_filter NetworkFilter Reference to the top-level network filter that applies to the NICs that use this profile. permissions Permission[ ] Permissions to allow usage of the vNIC profile. qos Qos Reference to the quality of service attributes to apply to the vNIC profile. 7.327.4. network_filter Reference to the top-level network filter that applies to the NICs that use this profile. Network filters enhance the ability to manage the network packets traffic to and from virtual machines. The network filter may either contain a reference to other filters, rules for traffic filtering, or a combination of both. 7.327.5. qos Reference to the quality of service attributes to apply to the vNIC profile. Quality of Service attributes regulate inbound and outbound network traffic of the NIC. 7.328. VnicProfileMapping struct Deprecated type that maps an external virtual NIC profile to one that exists in the Red Hat Virtualization Manager. If, for example, the desired virtual NIC profile's mapping includes the following two lines: Source network name Source network profile name Target virtual NIC profile ID red gold 738dd914-8ec8-4a8b-8628-34672a5d449b blue silver 892a12ec-2028-4451-80aa-ff3bf55d6bac The following form is deprecated since 4.2.1 and will be removed in the future: <vnic_profile_mappings> <vnic_profile_mapping> <source_network_name>red</source_network_name> <source_network_profile_name>gold</source_network_profile_name> <target_vnic_profile id="738dd914-8ec8-4a8b-8628-34672a5d449b"/> </vnic_profile_mapping> <vnic_profile_mapping> <source_network_name>blue</source_network_name> <source_network_profile_name>silver</source_network_profile_name> <target_vnic_profile id="892a12ec-2028-4451-80aa-ff3bf55d6bac"/> </vnic_profile_mapping> </vnic_profile_mappings> Table 7.427. Attributes summary Name Type Summary source_network_name String Deprecated attribute describing the name of the external network. source_network_profile_name String Deprecated attribute describing the name of the external network profile. 7.328.1. source_network_name Deprecated attribute describing the name of the external network. Warning Please note that this attribute has been deprecated since version 4.2.1 of the engine, and preserved only for backward compatibility. It will be removed in the future. 7.328.2. source_network_profile_name Deprecated attribute describing the name of the external network profile. 
Warning Please note that this attribute has been deprecated since version 4.2.1 of the engine, and preserved only for backward compatibility. It will be removed in the future. Table 7.428. Links summary Name Type Summary target_vnic_profile VnicProfile Deprecated attribute describing an existing virtual NIC profile. 7.328.3. target_vnic_profile Deprecated attribute describing an existing virtual NIC profile. Warning Please note that this attribute has been deprecated since version 4.2.1 of the engine, and preserved only for backward compatibility. It will be removed in the future. 7.329. VolumeGroup struct Table 7.429. Attributes summary Name Type Summary id String logical_units LogicalUnit[ ] name String 7.330. Watchdog struct This type represents a watchdog configuration. Table 7.430. Attributes summary Name Type Summary action WatchdogAction Watchdog action to be performed when watchdog is triggered. comment String Free text containing comments about this object. description String A human-readable description in plain text. id String A unique identifier. model WatchdogModel Model of watchdog device. name String A human-readable name in plain text. 7.330.1. model Model of watchdog device. Currently supported only I6300ESB. Table 7.431. Links summary Name Type Summary instance_type InstanceType Optionally references to an instance type the device is used by. template Template Optionally references to a template the device is used by. vm Vm Do not use this element, use vms instead. vms Vm[ ] References to the virtual machines that are using this device. 7.330.2. vms References to the virtual machines that are using this device. A device may be used by several virtual machines; for example, a shared disk my be used simultaneously by two or more virtual machines. 7.331. WatchdogAction enum This type describes available watchdog actions. Table 7.432. Values summary Name Summary dump Virtual machine process will get core dumped to the default path on the host. none No action will be performed when watchdog action is triggered. pause Virtual machine will be paused when watchdog action is triggered. poweroff Virtual machine will be powered off when watchdog action is triggered. reset Virtual machine will be rebooted when watchdog action is triggered. 7.331.1. none No action will be performed when watchdog action is triggered. However log message will still be generated. 7.332. WatchdogModel enum This type represents the watchdog model. Table 7.433. Values summary Name Summary diag288 The watchdog model for S390X machines. i6300esb PCI based watchdog model. 7.332.1. diag288 The watchdog model for S390X machines. S390X has an integrated watchdog facility that is controlled via the DIAG288 instruction. Use this model for S390X virtual machines. 7.332.2. i6300esb PCI based watchdog model. Use the I6300ESB watchdog for x86_64 and PPC64 virtual machines. 7.333. Weight struct Table 7.434. Attributes summary Name Type Summary comment String Free text containing comments about this object. description String A human-readable description in plain text. factor Integer id String A unique identifier. name String A human-readable name in plain text. Table 7.435. Links summary Name Type Summary scheduling_policy SchedulingPolicy scheduling_policy_unit SchedulingPolicyUnit
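The request bodies shown throughout this reference can be sent with any HTTP client. The following is a minimal sketch, not an official client, of the memory hot plug update described for the memory attribute of VmBase above: it sends the documented <vm><memory>1073741824</memory></vm> body to PUT /ovirt-engine/api/vms/123. The engine host name, the admin@internal credentials, the CA certificate path, and the use of HTTP Basic authentication are illustrative assumptions; substitute the values for your environment.

import requests

# Illustrative assumptions: engine URL, credentials, and CA bundle path.
ENGINE = "https://engine.example.com/ovirt-engine/api"
AUTH = ("admin@internal", "password")
CA_BUNDLE = "/etc/pki/ovirt-engine/ca.pem"

# Request body taken from the memory attribute description: 1 GiB = 2**30 bytes.
BODY = "<vm><memory>1073741824</memory></vm>"

# '123' is the example virtual machine id used throughout this reference.
response = requests.put(
    f"{ENGINE}/vms/123",
    data=BODY,
    headers={"Content-Type": "application/xml", "Accept": "application/xml"},
    auth=AUTH,
    verify=CA_BUNDLE,
)
response.raise_for_status()
print(response.status_code)

If the virtual machine is in state up and the requested increment is divisible by the HotPlugMemoryBlockSizeMb value, the engine hot plugs the additional memory as described in the memory attribute section.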
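Similarly, the Katello errata listing shown earlier in this reference can be retrieved and parsed with standard tooling. The sketch below, under the same illustrative assumptions about the engine URL, credentials, and CA certificate, issues GET /ovirt-engine/api/vms/123/katelloerrata and prints the name, type, and title of each erratum from the XML response format documented above.

import xml.etree.ElementTree as ET

import requests

# Illustrative assumptions: engine URL, credentials, and CA bundle path.
ENGINE = "https://engine.example.com/ovirt-engine/api"
AUTH = ("admin@internal", "password")
CA_BUNDLE = "/etc/pki/ovirt-engine/ca.pem"

response = requests.get(
    f"{ENGINE}/vms/123/katelloerrata",
    headers={"Accept": "application/xml"},
    auth=AUTH,
    verify=CA_BUNDLE,
)
response.raise_for_status()

# The response root is <katello_errata>, containing one <katello_erratum> per erratum,
# each with <name>, <type>, and <title> child elements.
for erratum in ET.fromstring(response.content).findall("katello_erratum"):
    print(erratum.findtext("name"), erratum.findtext("type"), erratum.findtext("title"))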
[ "GET /ovirt-engine/api", "<api> <link rel=\"hosts\" href=\"/ovirt-engine/api/hosts\"/> <link rel=\"vms\" href=\"/ovirt-engine/api/vms\"/> <product_info> <name>oVirt Engine</name> <vendor>ovirt.org</vendor> <version> <build>0</build> <full_version>4.1.0_master</full_version> <major>4</major> <minor>1</minor> <revision>0</revision> </version> </product_info> <special_objects> <link rel=\"templates/blank\" href=\"...\"/> <link rel=\"tags/root\" href=\"...\"/> </special_objects> <summary> <vms> <total>10</total> <active>3</active> </vms> <hosts> <total>2</total> <active>2</active> </hosts> <users> <total>8</total> <active>2</active> </users> <storage_domains> <total>2</total> <active>2</active> </storage_domains> </summary> <time>2016-12-12T12:22:25.866+01:00</time> </api>", "GET /ovirt-engine/api/vms/123/applications/456", "<application href=\"/ovirt-engine/api/vms/123/applications/456\" id=\"456\"> <name>application-test-1.0.0-0.el7</name> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </application>", "GET /ovirt-engine/api/hosts/123/nics/321", "<host_nic href=\"/ovirt-engine/api/hosts/123/nics/321\" id=\"321\"> <bonding> <slaves> <host_nic href=\"/ovirt-engine/api/hosts/123/nics/456\" id=\"456\" /> </slaves> <active_slave href=\"/ovirt-engine/api/hosts/123/nics/456\" id=\"456\" /> </bonding> </host_nic>", "{ \"cluster\" : [ { \"ballooning_enabled\" : \"false\", \"cpu\" : { \"architecture\" : \"x86_64\", \"type\" : \"Intel SandyBridge Family\" }, \"custom_scheduling_policy_properties\" : { \"property\" : [ { \"name\" : \"HighUtilization\", \"value\" : \"80\" }, { \"name\" : \"CpuOverCommitDurationMinutes\", \"value\" : \"2\" } ] }, \"error_handling\" : { \"on_error\" : \"migrate\" }, \"fencing_policy\" : { \"enabled\" : \"true\", \"skip_if_connectivity_broken\" : { \"enabled\" : \"false\", \"threshold\" : \"50\" }, \"skip_if_gluster_bricks_up\" : \"false\", \"skip_if_gluster_quorum_not_met\" : \"false\", \"skip_if_sd_active\" : { \"enabled\" : \"false\" } }, \"gluster_service\" : \"false\", \"firewall_type\" : \"iptables\", \"ha_reservation\" : \"false\", \"ksm\" : { \"enabled\" : \"true\", \"merge_across_nodes\" : \"true\" }, \"memory_policy\" : { \"over_commit\" : { \"percent\" : \"100\" }, \"transparent_hugepages\" : { \"enabled\" : \"true\" } }, \"migration\" : { \"auto_converge\" : \"inherit\", \"bandwidth\" : { \"assignment_method\" : \"auto\" }, \"compressed\" : \"inherit\", \"policy\" : { \"id\" : \"00000000-0000-0000-0000-000000000000\" } }, \"required_rng_sources\" : { \"required_rng_source\" : [ \"random\" ] }, \"switch_type\" : \"legacy\", \"threads_as_cores\" : \"false\", \"trusted_service\" : \"false\", \"tunnel_migration\" : \"false\", \"version\" : { \"major\" : \"4\", \"minor\" : \"1\" }, \"virt_service\" : \"true\", \"data_center\" : { \"href\" : \"/ovirt-engine/api/datacenters/123\", \"id\" : \"123\" }, \"mac_pool\" : { \"href\" : \"/ovirt-engine/api/macpools/456\", \"id\" : \"456\" }, \"scheduling_policy\" : { \"href\" : \"/ovirt-engine/api/schedulingpolicies/789\", \"id\" : \"789\" }, \"actions\" : { \"link\" : [ { \"href\" : \"/ovirt-engine/api/clusters/234/resetemulatedmachine\", \"rel\" : \"resetemulatedmachine\" } ] }, \"name\" : \"Default\", \"description\" : \"The default server cluster\", \"href\" : \"/ovirt-engine/api/clusters/234\", \"id\" : \"234\", \"link\" : [ { \"href\" : \"/ovirt-engine/api/clusters/234/permissions\", \"rel\" : \"permissions\" }, { \"href\" : \"/ovirt-engine/api/clusters/234/cpuprofiles\", \"rel\" : \"cpuprofiles\" }, { \"href\" : 
\"/ovirt-engine/api/clusters/234/networkfilters\", \"rel\" : \"networkfilters\" }, { \"href\" : \"/ovirt-engine/api/clusters/234/networks\", \"rel\" : \"networks\" }, { \"href\" : \"/ovirt-engine/api/clusters/234/affinitygroups\", \"rel\" : \"affinitygroups\" }, { \"href\" : \"/ovirt-engine/api/clusters/234/glusterhooks\", \"rel\" : \"glusterhooks\" }, { \"href\" : \"/ovirt-engine/api/clusters/234/glustervolumes\", \"rel\" : \"glustervolumes\" }, { \"href\" : \"/ovirt-engine/api/clusters/234/enabledfeatures\", \"rel\" : \"enabledfeatures\" }, { \"href\" : \"/ovirt-engine/api/clusters/234/externalnetworkproviders\", \"rel\" : \"externalnetworkproviders\" } ] } ] }", "PUT /ovirt-engine/api/clusters/123", "<cluster> <custom_scheduling_policy_properties> <property> <name>HighUtilization</name> <value>70</value> </property> </custom_scheduling_policy_properties> </cluster>", "PUT /ovirt-engine/api/cluster/123", "<cluster> <fencing_policy> <enabled>true</enabled> <skip_if_sd_active> <enabled>false</enabled> </skip_if_sd_active> <skip_if_connectivity_broken> <enabled>false</enabled> <threshold>50</threshold> </skip_if_connectivity_broken> </fencing_policy> </cluster>", "GET /ovirt-engine/api/clusters/123", "<cluster> <version> <major>4</major> <minor>0</minor> </version> </cluster>", "PUT /ovirt-engine/api/clusters/123", "<cluster> <version> <major>4</major> <minor>1</minor> </version> </cluster>", "<?xml version='1.0' encoding='UTF-8'?> <ovf:Envelope xmlns:ovf=\"http://schemas.dmtf.org/ovf/envelope/1/\" xmlns:rasd=\"http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData\" xmlns:vssd=\"http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" ovf:version=\"3.5.0.0\"> <References/> <Section xsi:type=\"ovf:NetworkSection_Type\"> <Info>List of networks</Info> <Network ovf:name=\"Network 1\"/> </Section> <Section xsi:type=\"ovf:DiskSection_Type\"> <Info>List of Virtual Disks</Info> </Section> <Content ovf:id=\"out\" xsi:type=\"ovf:VirtualSystem_Type\"> <CreationDate>2014/12/03 04:25:45</CreationDate> <ExportDate>2015/02/09 14:12:24</ExportDate> <DeleteProtected>false</DeleteProtected> <SsoMethod>guest_agent</SsoMethod> <IsSmartcardEnabled>false</IsSmartcardEnabled> <TimeZone>Etc/GMT</TimeZone> <default_boot_sequence>0</default_boot_sequence> <Generation>1</Generation> <VmType>1</VmType> <MinAllocatedMem>1024</MinAllocatedMem> <IsStateless>false</IsStateless> <IsRunAndPause>false</IsRunAndPause> <AutoStartup>false</AutoStartup> <Priority>1</Priority> <CreatedByUserId>fdfc627c-d875-11e0-90f0-83df133b58cc</CreatedByUserId> <IsBootMenuEnabled>false</IsBootMenuEnabled> <IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled> <IsSpiceCopyPasteEnabled>true</IsSpiceCopyPasteEnabled> <Name>VM_export</Name> <TemplateId>00000000-0000-0000-0000-000000000000</TemplateId> <TemplateName>Blank</TemplateName> <IsInitilized>false</IsInitilized> <Origin>3</Origin> <DefaultDisplayType>1</DefaultDisplayType> <TrustedService>false</TrustedService> <OriginalTemplateId>00000000-0000-0000-0000-000000000000</OriginalTemplateId> <OriginalTemplateName>Blank</OriginalTemplateName> <UseLatestVersion>false</UseLatestVersion> <Section ovf:id=\"70b4d9a7-4f73-4def-89ca-24fc5f60e01a\" ovf:required=\"false\" xsi:type=\"ovf:OperatingSystemSection_Type\"> <Info>Guest Operating System</Info> <Description>other</Description> </Section> <Section xsi:type=\"ovf:VirtualHardwareSection_Type\"> <Info>1 CPU, 1024 Memory</Info> 
<System> <vssd:VirtualSystemType>ENGINE 3.5.0.0</vssd:VirtualSystemType> </System> <Item> <rasd:Caption>1 virtual cpu</rasd:Caption> <rasd:Description>Number of virtual CPU</rasd:Description> <rasd:InstanceId>1</rasd:InstanceId> <rasd:ResourceType>3</rasd:ResourceType> <rasd:num_of_sockets>1</rasd:num_of_sockets> <rasd:cpu_per_socket>1</rasd:cpu_per_socket> </Item> <Item> <rasd:Caption>1024 MB of memory</rasd:Caption> <rasd:Description>Memory Size</rasd:Description> <rasd:InstanceId>2</rasd:InstanceId> <rasd:ResourceType>4</rasd:ResourceType> <rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits> <rasd:VirtualQuantity>1024</rasd:VirtualQuantity> </Item> <Item> <rasd:Caption>USB Controller</rasd:Caption> <rasd:InstanceId>3</rasd:InstanceId> <rasd:ResourceType>23</rasd:ResourceType> <rasd:UsbPolicy>DISABLED</rasd:UsbPolicy> </Item> </Section> </Content> </ovf:Envelope>", "GET /ovirt-engine/api/datacenters/123", "<data_center> <version> <major>4</major> <minor>0</minor> </version> </data_center>", "PUT /ovirt-engine/api/datacenters/123", "<data_center> <version> <major>4</major> <minor>1</minor> </version> </data_center>", "POST /ovirt-engine/api/vms/123/diskattachments", "<disk_attachment> <read_only>true</read_only> </disk_attachment>", "<statistics> <statistic href=\"/ovirt-engine/api/disks/123/statistics/456\" id=\"456\"> <name>data.current.read</name> <description>Read data rate</description> <kind>gauge</kind> <type>decimal</type> <unit>bytes_per_second</unit> <values> <value> <datum>1052</datum> </value> </values> <disk href=\"/ovirt-engine/api/disks/123\" id=\"123\"/> </statistic> </statistics>", "GET /ovirt-engine/api/disks/123/statistics", "<disk_attachment> <logical_name>/dev/vda</logical_name> </disk_attachment>", "<disk_attachment> <read_only>true</read_only> </disk_attachment>", "POST /ovirt-engine/api/vms/123/diskattachments", "<disk_attachment> <read_only>true</read_only> </disk_attachment>", "<statistics> <statistic href=\"/ovirt-engine/api/disks/123/statistics/456\" id=\"456\"> <name>data.current.read</name> <description>Read data rate</description> <kind>gauge</kind> <type>decimal</type> <unit>bytes_per_second</unit> <values> <value> <datum>1052</datum> </value> </values> <disk href=\"/ovirt-engine/api/disks/123\" id=\"123\"/> </statistic> </statistics>", "GET /ovirt-engine/api/disks/123/statistics", "ova:///mnt/ova/ova_file.ova", "vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1", "GET /ovirt-engine/api/vms/123", "<vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"> <guest_operating_system> <architecture>x86_64</architecture> <codename>Maipo</codename> <distribution>Red Hat Enterprise Linux Server</distribution> <family>Linux</family> <kernel> <version> <build>0</build> <full_version>3.10.0-514.10.2.el7.x86_64</full_version> <major>3</major> <minor>10</minor> <revision>514</revision> </version> </kernel> <version> <full_version>7.3</full_version> <major>7</major> <minor>3</minor> </version> </guest_operating_system> </vm>", "GET /ovirt-engine/api/hosts/123", "<host href=\"/ovirt-engine/api/hosts/123\" id=\"123\"> <hardware_information> <family>Red Hat Enterprise Linux</family> <manufacturer>Red Hat</manufacturer> <product_name>RHEV Hypervisor</product_name> <serial_number>01234567-89AB-CDEF-0123-456789ABCDEF</serial_number> <supported_rng_sources> <supported_rng_source>random</supported_rng_source> </supported_rng_sources> <uuid>12345678-9ABC-DEF0-1234-56789ABCDEF0</uuid> <version>1.2-34.5.el7ev</version> </hardware_information> </application>", "PUT 
/ovirt-engine/api/hosts/123", "<host> <ksm> <enabled>true</enabled> </ksm> </host>", "PUT /ovirt-engine/api/hosts/123", "<host> <transparent_hugepages> <enabled>true</enabled> </transparent_hugepages> </host>", "GET /ovirt-engine/api/hosts/123", "<host> <version> <build>999</build> <full_version>vdsm-4.18.999-419.gitcf06367.el7</full_version> <major>4</major> <minor>18</minor> <revision>0</revision> </version> </host>", "GET /ovirt-engine/api/hosts/123/cpuunits", "<host_cpu_units> <host_cpu_unit> <core_id>0</core_id> <cpu_id>0</cpu_id> <socket_id>0</socket_id> <vms> <vm href=\"/ovirt-engine/api/vms/def\" id=\"def\" /> </vms> </host_cpu_unit> <host_cpu_unit> <core_id>0</core_id> <cpu_id>1</cpu_id> <socket_id>1</socket_id> <runs_vdsm>true</runs_vdsm> </host_cpu_unit> <host_cpu_unit> <core_id>0</core_id> <cpu_id>2</cpu_id> <socket_id>2</socket_id> </host_cpu_unit> </host_cpu_units>", "GET /ovirt-engine/api/hosts/123/katelloerrata", "<katello_errata> <katello_erratum href=\"/ovirt-engine/api/katelloerrata/456\" id=\"456\"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> </packages> </katello_erratum> </katello_errata>", "<statistics> <statistic href=\"/ovirt-engine/api/hosts/123/statistics/456\" id=\"456\"> <name>memory.total</name> <description>Total memory</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>25165824000</datum> </value> </values> <host href=\"/ovirt-engine/api/hosts/123\" id=\"123\"/> </statistic> </statistics>", "<host_nic href=\"/ovirt-engine/api/hosts/123/nics/456\" id=\"456\"> <name>eth0</name> <boot_protocol>static</boot_protocol> <bridged>true</bridged> <custom_configuration>true</custom_configuration> <ip> <address>192.168.122.39</address> <gateway>192.168.122.1</gateway> <netmask>255.255.255.0</netmask> <version>v4</version> </ip> <ipv6> <gateway>::</gateway> <version>v6</version> </ipv6> <ipv6_boot_protocol>none</ipv6_boot_protocol> <mac> <address>52:54:00:0c:79:1d</address> </mac> <mtu>1500</mtu> <status>up</status> </host_nic>", "<host_nic href=\"/ovirt-engine/api/hosts/123/nics/456\" id=\"456\"> <name>bond0</name> <mac address=\"00:00:00:00:00:00\"/> <ip> <address>192.168.122.39</address> <gateway>192.168.122.1</gateway> <netmask>255.255.255.0</netmask> <version>v4</version> </ip> <boot_protocol>dhcp</boot_protocol> <bonding> <options> <option> <name>mode</name> <value>4</value> <type>Dynamic link aggregation (802.3ad)</type> </option> <option> <name>miimon</name> <value>100</value> </option> </options> <slaves> <host_nic id=\"123\"/> <host_nic id=\"456\"/> </slaves> </bonding> <mtu>1500</mtu> <bridged>true</bridged> <custom_configuration>false</custom_configuration> </host_nic>", "POST /ovirt-engine/api/storagedomains/", "<storage_domain> <name>kamniraio-cinder</name> <type>managed_block_storage</type> <storage> <type>managed_block_storage</type> <driver_options> <property> <name>san_ip</name> <value>192.168.1.1</value> </property> <property> <name>san_login</name> <value>username</value> </property> <property> <name>san_password</name> <value>password</value> </property> <property> <name>use_multipath_for_image_xfer</name> <value>true</value> </property> <property> 
<name>volume_driver</name> <value>cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver</value> </property> </driver_options> </storage> <host> <name>host</name> </host> </storage_domain>", "POST /ovirt-engine/api/storagedomains/", "<storage_domain> <name>kamniraio-cinder</name> <type>managed_block_storage</type> <storage> <type>managed_block_storage</type> <driver_options> <property> <name>san_ip</name> <value>192.168.1.1</value> </property> <property> <name>san_login</name> <value>username</value> </property> <property> <name>san_password</name> <value>password</value> </property> <property> <name>use_multipath_for_image_xfer</name> <value>true</value> </property> <property> <name>volume_driver</name> <value>cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver</value> </property> </driver_options> <driver_sensitive_options> <property> <name>username</name> <value>admin</value> </property> <property> <name>password</name> <value>123</value> </property> <property> <name>san_ip</name> <value>192.168.1.1</value> </property> </driver_sensitive_options> </storage> <host> <name>host</name> </host> </storage_domain>", "<image_transfer> <snapshot id=\"2fb24fa2-a5db-446b-b733-4654661cd56d\"/> <direction>download</direction> <format>raw</format> <shallow>true</shallow> </image_transfer>", "<image_transfer> <disk id=\"ff6be46d-ef5d-41d6-835c-4a68e8956b00\"/> <direction>download</direction> <format>raw</format> <shallow>true</shallow> </image_transfer>", "from ovirt_imageio import client client.download( transfer.transfer_url, \"51275e7d-42e9-491f-9d65-b9211c897eac\", backing_file=\"07c0ccac-0845-4665-9097-d0a3b16cf43b\", backing_format=\"qcow2\")", "PUT /ovirt-engine/api/vms/123", "<vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <initialization/> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <memory>1073741824</memory> </vm>", "PUT /api/vms/123", "<vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm>", "<ip> <address>192.168.0.1</address> </ip>", "<ip> <address>2620:52:0:20f0:4216:7eff:feaa:1b50</address> </ip>", "<link_layer_discovery_protocol_element> <name>Port VLAN Id</name> <oui>32962</oui> <properties> <property> <name>vlan id</name> <value>488</value> </property> <property> <name>vlan name</name> <value>v2-0488-03-0505</value> </property> </properties> <subtype>3</subtype> <type>127</type> </link_layer_discovery_protocol_element>", "<mac_pool href=\"/ovirt-engine/api/macpools/123\" id=\"123\"> <name>Default</name> <description>Default MAC pool</description> <allow_duplicates>false</allow_duplicates> <default_pool>true</default_pool> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:E6</to> </range> </ranges> </mac_pool>", "{ \"network\" : [ { \"data_center\" : { \"href\" : \"/ovirt-engine/api/datacenters/123\", \"id\" : \"123\" }, \"stp\" : \"false\", \"mtu\" : \"0\", \"usages\" : { \"usage\" : [ \"vm\" ] }, \"name\" : \"ovirtmgmt\", \"description\" : \"Management Network\", \"href\" : \"/ovirt-engine/api/networks/456\", \"id\" : \"456\", \"link\" : [ { \"href\" : \"/ovirt-engine/api/networks/456/permissions\", \"rel\" : \"permissions\" }, { \"href\" : \"/ovirt-engine/api/networks/456/vnicprofiles\", \"rel\" : \"vnicprofiles\" }, { \"href\" : 
\"/ovirt-engine/api/networks/456/labels\", \"rel\" : \"labels\" } ] } ] }", "<network href=\"/ovirt-engine/api/networks/456\" id=\"456\"> <name>ovirtmgmt</name> <description>Management Network</description> <link href=\"/ovirt-engine/api/networks/456/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/networks/456/vnicprofiles\" rel=\"vnicprofiles\"/> <link href=\"/ovirt-engine/api/networks/456/labels\" rel=\"labels\"/> <data_center href=\"/ovirt-engine/api/datacenters/123\" id=\"123\"/> <stp>false</stp> <mtu>0</mtu> <usages> <usage>vm</usage> </usages> </network>", "<network_attachment href=\"/ovirt-engine/api/hosts/123/nics/456/networkattachments/789\" id=\"789\"> <network href=\"/ovirt-engine/api/networks/234\" id=\"234\"/> <host_nic href=\"/ovirt-engine/api/hosts/123/nics/123\" id=\"123\"/> <in_sync>true</in_sync> <ip_address_assignments> <ip_address_assignment> <assignment_method>static</assignment_method> <ip> <address>192.168.122.39</address> <gateway>192.168.122.1</gateway> <netmask>255.255.255.0</netmask> <version>v4</version> </ip> </ip_address_assignment> </ip_address_assignments> <reported_configurations> <reported_configuration> <name>mtu</name> <expected_value>1500</expected_value> <actual_value>1500</actual_value> <in_sync>true</in_sync> </reported_configuration> <reported_configuration> <name>bridged</name> <expected_value>true</expected_value> <actual_value>true</actual_value> <in_sync>true</in_sync> </reported_configuration> </reported_configurations> </network_attachment>", "POST /ovirt-engine/api/hosts/123/nics/456/networkattachments", "<networkattachment> <network id=\"234\"/> </networkattachment>", "POST /ovirt-engine/api/hosts/123/networkattachments", "<network_attachment> <network id=\"234\"/> <host_nic id=\"456\"/> </network_attachment>", "PUT /ovirt-engine/api/hosts/123/nics/456/networkattachments/789", "<network_attachment> <ip_address_assignments> <ip_address_assignment> <assignment_method>static</assignment_method> <ip> <address>7.1.1.1</address> <gateway>7.1.1.2</gateway> <netmask>255.255.255.0</netmask> <version>v4</version> </ip> </ip_address_assignment> </ip_address_assignments> </network_attachment>", "DELETE /ovirt-engine/api/hosts/123/nics/456/networkattachments/789", "<network_attachment> <properties> <property> <name>bridge_opts</name> <value> forward_delay=1500 group_fwd_mask=0x0 multicast_snooping=1 </value> </property> </properties> </network_attachment>", "<network_filter id=\"00000019-0019-0019-0019-00000000026c\"> <name>example-filter</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter>", "<network_filter_parameter id=\"123\"> <name>IP</name> <value>10.0.1.2</value> </network_filter_parameter>", "<nic href=\"/ovirt-engine/api/vms/123/nics/456\" id=\"456\"> <name>nic1</name> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <interface>virtio</interface> <linked>true</linked> <mac> <address>02:00:00:00:00:00</address> </mac> <plugged>true</plugged> <vnic_profile href=\"/ovirt-engine/api/vnicprofiles/789\" id=\"789\"/> </nic>", "<host_numa_node href=\"/ovirt-engine/api/hosts/0923f1ea/numanodes/007cf1ab\" id=\"007cf1ab\"> <cpu> <cores> <core> <index>0</index> </core> </cores> </cpu> <index>0</index> <memory>65536</memory> <node_distance>40 20 40 10</node_distance> <host href=\"/ovirt-engine/api/hosts/0923f1ea\" id=\"0923f1ea\"/> </host_numa_node>", "<statistics> <statistic href=\"/ovirt-engine/api/hosts/123/numanodes/456/statistics/789\" id=\"789\"> 
<name>memory.total</name> <description>Total memory</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>25165824000</datum> </value> </values> <host_numa_node href=\"/ovirt-engine/api/hosts/123/numanodes/456\" id=\"456\" /> </statistic> </statistics>", "<package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package>", "<api> <product_info> <name>oVirt Engine</name> <vendor>ovirt.org</vendor> <version> <build>0</build> <full_version>4.1.0_master</full_version> <major>4</major> <minor>1</minor> <revision>0</revision> </version> </product_info> </api>", "<quota href=\"/ovirt-engine/api/datacenters/7044934e/quotas/dcad5ddc\" id=\"dcad5ddc\"> <name>My Quota</name> <description>A quota for my oVirt environment</description> <cluster_hard_limit_pct>0</cluster_hard_limit_pct> <cluster_soft_limit_pct>0</cluster_soft_limit_pct> <data_center href=\"/ovirt-engine/api/datacenters/7044934e\" id=\"7044934e\"/> <storage_hard_limit_pct>0</storage_hard_limit_pct> <storage_soft_limit_pct>0</storage_soft_limit_pct> </quota>", "<action> <registration_configuration> <affinity_group_mappings> <registration_affinity_group_mapping> <from> <name>affinity</name> </from> <to> <name>affinity2</name> </to> </registration_affinity_group_mapping> </affinity_group_mappings> </registration_configuration> </action>", "<action> <registration_configuration> <affinity_label_mappings> <registration_affinity_label_mapping> <from> <name>affinity_label</name> </from> <to> <name>affinity_label2</name> </to> </registration_affinity_label_mapping> </affinity_label_mappings> </registration_configuration> </action>", "<action> <registration_configuration> <cluster_mappings> <registration_cluster_mapping> <from> <name>myoriginalcluster</name> </from> <to> <name>mynewcluster</name> </to> </registration_cluster_mapping> </cluster_mappings> </registration_configuration> </action>", "<action> <registration_configuration> <cluster_mappings> <registration_cluster_mapping> <from> <name>myoriginalcluster</name> </from> <to> <name>mynewcluster</name> </to> </registration_cluster_mapping> </cluster_mappings> <role_mappings> <registration_role_mapping> <from> <name>SuperUser</name> </from> <to> <name>UserVmRunTimeManager</name> </to> </registration_role_mapping> </role_mappings> <domain_mappings> <registration_domain_mapping> <from> <name>redhat</name> </from> <to> <name>internal</name> </to> </registration_domain_mapping> </domain_mappings> <lun_mappings> <registration_lun_mapping> <from id=\"111\"> </from> <to id=\"222\"> <alias>weTestLun</alias> <lun_storage> <type>iscsi</type> <logical_units> <logical_unit id=\"36001405fb1ddb4b91e44078f1fffcfef\"> <address>44.33.11.22</address> <port>3260</port> <portal>1</portal> <target>iqn.2017-11.com.name.redhat:444</target> </logical_unit> </logical_units> </lun_storage> </to> </registration_lun_mapping> </lun_mappings> <affinity_group_mappings> <registration_affinity_group_mapping> <from> <name>affinity</name> </from> <to> <name>affinity2</name> </to> </registration_affinity_group_mapping> </affinity_group_mappings> <affinity_label_mappings> <registration_affinity_label_mapping> <from> <name>affinity_label</name> </from> <to> <name>affinity_label2</name> </to> </registration_affinity_label_mapping> </affinity_label_mappings> <vnic_profile_mappings> <registration_vnic_profile_mapping> <from> <name>gold</name> <network> <name>red</name> </network> </from> <to id=\"738dd914-8ec8-4a8b-8628-34672a5d449b\"/> </registration_vnic_profile_mapping> 
<registration_vnic_profile_mapping> <from> <name>silver</name> <network> <name>blue</name> </network> </from> <to> <name>copper</name> <network> <name>orange</name> </network> </to> </registration_vnic_profile_mapping> </vnic_profile_mappings> </registration_configuration> </action>", "<action> <registration_configuration> <domain_mappings> <registration_domain_mapping> <from> <name>redhat</name> </from> <to> <name>internal</name> </to> </registration_domain_mapping> </domain_mappings> </registration_configuration> </action>", "<action> <registration_configuration> <lun_mappings> <registration_lun_mapping> <lun_mappings> <registration_lun_mapping> <from id=\"111\"> </from> <to id=\"222\"> <alias>weTestLun</alias> <lun_storage> <type>iscsi</type> <logical_units> <logical_unit id=\"36001405fb1ddb4b91e44078f1fffcfef\"> <address>44.33.11.22</address> <port>3260</port> <portal>1</portal> <target>iqn.2017-11.com.name.redhat:444</target> </logical_unit> </logical_units> </lun_storage> </to> </registration_lun_mapping> </lun_mappings> </registration_configuration> </action>", "<action> <registration_configuration> <role_mappings> <registration_eole_mapping> <from> <name>SuperUser</name> </from> <to> <name>UserVmRunTimeManager</name> </to> </registration_role_mapping> </role_mappings> </registration_configuration> </action>", "<vnic_profile_mappings> <registration_vnic_profile_mapping> <from> <name>gold</name> <network> <name>red</name> </network> </from> <to id=\"738dd914-8ec8-4a8b-8628-34672a5d449b\"/> </registration_vnic_profile_mapping> <registration_vnic_profile_mapping> <from> <name></name> <network> <name></name> </network> </from> <to id=\"892a12ec-2028-4451-80aa-ff3bf55d6bac\"/> </registration_vnic_profile_mapping> <registration_vnic_profile_mapping> <from> <name>silver</name> <network> <name>blue</name> </network> </from> <to> <name>copper</name> <network> <name>orange</name> </network> </to> </registration_vnic_profile_mapping> <registration_vnic_profile_mapping> <from> <name>platinum</name> <network> <name>yellow</name> </network> </from> <to> <name></name> <network> <name></name> </network> </to> </registration_vnic_profile_mapping> <registration_vnic_profile_mapping> <from> <name>bronze</name> <network> <name>green</name> </network> </from> </registration_vnic_profile_mapping> </vnic_profile_mappings>", "<snapshot id=\"456\" href=\"/ovirt-engine/api/vms/123/snapshots/456\"> <actions> <link rel=\"restore\" href=\"/ovirt-engine/api/vms/123/snapshots/456/restore\"/> </actions> <vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"/> <description>Virtual Machine 1 - Snapshot A</description> <type>active</type> <date>2010-08-16T14:24:29</date> <persist_memorystate>false</persist_memorystate> </snapshot>", "PUT /ovirt-engine/api/vms/123", "<vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <initialization/> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <memory>1073741824</memory> </vm>", "PUT /api/vms/123", "<vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm>", "GET /ovirt-engine/api/vms/123/katelloerrata", "<katello_errata> <katello_erratum href=\"/ovirt-engine/api/katelloerrata/456\" id=\"456\"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix 
update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> </packages> </katello_erratum> </katello_errata>", "<statistics> <statistic id=\"1234\" href=\"/ovirt-engine/api/hosts/1234/nics/1234/statistics/1234\"> <name>data.current.rx</name> <description>Receive data rate</description> <values type=\"DECIMAL\"> <value> <datum>0</datum> </value> </values> <type>GAUGE</type> <unit>BYTES_PER_SECOND</unit> <host_nic id=\"1234\" href=\"/ovirt-engine/api/hosts/1234/nics/1234\"/> </statistic> </statistics>", "<storage_connection id=\"123\"> <address>mynfs.example.com</address> <type>nfs</type> <path>/exports/mydata</path> </storage_connection>", "<storage_domain href=\"/ovirt-engine/api/storagedomains/123\" id=\"123\"> <name>mydata</name> <description>My data</description> <available>38654705664</available> <committed>1073741824</committed> <critical_space_action_blocker>5</critical_space_action_blocker> <external_status>ok</external_status> <master>true</master> <storage> <address>mynfs.example.com</address> <nfs_version>v3</nfs_version> <path>/exports/mydata</path> <type>nfs</type> </storage> <storage_format>v3</storage_format> <type>data</type> <used>13958643712</used> <warning_low_space_indicator>10</warning_low_space_indicator> <wipe_after_delete>false</wipe_after_delete> <data_centers> <data_center href=\"/ovirt-engine/api/datacenters/456\" id=\"456\"/> </data_centers> </storage_domain>", "PUT /ovirt-engine/api/vms/123", "<vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <initialization/> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <memory>1073741824</memory> </vm>", "PUT /api/vms/123", "<vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm>", "<vm_numa_node href=\"/ovirt-engine/api/vms/123/numanodes/456\" id=\"456\"> <cpu> <cores> <core> <index>0</index> </core> </cores> </cpu> <index>0</index> <memory>1024</memory> <numa_node_pins> <numa_node_pin> <index>0</index> </numa_node_pin> </numa_node_pins> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\" /> </vm_numa_node>", "<statistics> <statistic href=\"/ovirt-engine/api/hosts/123/numanodes/456/statistics/789\" id=\"789\"> <name>memory.total</name> <description>Total memory</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>25165824000</datum> </value> </values> <host_numa_node href=\"/ovirt-engine/api/hosts/123/numanodes/456\" id=\"456\" /> </statistic> </statistics>", "PUT /ovirt-engine/api/vms/123", "<vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <initialization/> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <memory>1073741824</memory> </vm>", "PUT /api/vms/123", "<vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm>", "GET /ovirt-engine/api/vms/123/katelloerrata", 
"<katello_errata> <katello_erratum href=\"/ovirt-engine/api/katelloerrata/456\" id=\"456\"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> </packages> </katello_erratum> </katello_errata>", "PUT /ovirt-engine/api/vms/123", "<vm> <cpu> <topology> <sockets>4</sockets> <cores>2</cores> <threads>2</threads> </topology> </cpu> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <initialization/> </vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <memory>1073741824</memory> </vm>", "PUT /api/vms/123", "<vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host> <name>Host1</name> </host> <host> <name>Host2</name> </host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm>", "<vnic_profile_mappings> <vnic_profile_mapping> <source_network_name>red</source_network_name> <source_network_profile_name>gold</source_network_profile_name> <target_vnic_profile id=\"738dd914-8ec8-4a8b-8628-34672a5d449b\"/> </vnic_profile_mapping> <vnic_profile_mapping> <source_network_name>blue</source_network_name> <source_network_profile_name>silver</source_network_profile_name> <target_vnic_profile id=\"892a12ec-2028-4451-80aa-ff3bf55d6bac\"/> </vnic_profile_mapping> </vnic_profile_mappings>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/rest_api_guide/types
8.3. Cloning a Snapshot
8.3. Cloning a Snapshot A clone, or writable snapshot, is a new volume that is created from a particular snapshot. To clone a snapshot, execute the following command. where, clonename: The name of the clone, that is, the new volume that will be created. snapname: The name of the snapshot that is being cloned. Note Unlike restoring a snapshot, the original snapshot is retained after it has been cloned. The snapshot must be in the activated state and all the snapshot bricks must be in the running state before taking a clone. The server nodes must also be in quorum. This is a space-efficient clone, so both the clone (the new volume) and the snapshot share the same LVM backend. The space consumption of the LVM grows as the new volume (clone) diverges from the snapshot. For example: To check the status of the newly cloned snapshot, execute the following command. For example: In the example, the clone is in the Created state, similar to a newly created volume. The volume must be started explicitly before it can be used.
[ "gluster snapshot clone < clonename > < snapname >", "gluster snapshot clone clone_vol snap1 snapshot clone: success: Clone clone_vol created successfully", "gluster vol info < clonename >", "gluster vol info clone_vol Volume Name: clone_vol Type: Distribute Volume ID: cdd59995-9811-4348-8e8d-988720db3ab9 Status: Created Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: 10.00.00.01:/var/run/gluster/snaps/clone_vol/brick1/brick3 Options Reconfigured: performance.readdir-ahead: on" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/ch08s03
4.2. Designing the Directory Tree
4.2. Designing the Directory Tree There are several major decisions to plan in the directory tree design: Choosing a suffix to contain the data. Determining the hierarchical relationship among data entries. Naming the entries in the directory tree hierarchy. 4.2.1. Choosing a Suffix The suffix is the name of the entry at the root of the directory tree, and the directory data are stored beneath it. The directory can contain more than one suffix. It is possible to use multiple suffixes if there are two or more directory trees of information that do not have a natural common root. By default, the standard Directory Server deployment contains multiple suffixes, one for storing data and the others for data needed by internal directory operations (such as configuration information and the directory schema). For more information on these standard directory suffixes, see the Red Hat Directory Server Administration Guide . 4.2.1.1. Suffix Naming Conventions All entries in the directory should be located below a common base entry, the root suffix . When choosing a name for the root directory suffix, consider these four points to make the name effective: Globally unique. Static, so it rarely, if ever, changes. Short, so that entries beneath it are easier to read on screen. Easy for a person to type and remember. In a single enterprise environment, choose a directory suffix that aligns with a DNS name or Internet domain name of the enterprise. For example, if the enterprise owns the domain name of example.com , then the directory suffix is logically dc=example,dc=com . The dc attribute represents the suffix by breaking the domain name into its component parts. Normally, any attribute can be used to name the root suffix. However, for a hosting organization, limit the root suffix to the following attributes: dc defines an component of the domain name. c contains the two-digit code representing the country name, as defined by ISO. l identifies the county, city, or other geographical area where the entry is located or that is associated with the entry. st identifies the state or province where the entry resides. o identifies the name of the organization to which the entry belongs. The presence of these attributes allows for interoperability with subscriber applications. For example, a hosting organization might use these attributes to create a root suffix for one of its clients, example_a , such as o=example_a, st=Washington,c=US . Using an organization name followed by a country designation is typical of the X.500 naming convention for suffixes. 4.2.1.2. Naming Multiple Suffixes Each suffix used with the directory is a unique directory tree. There are several ways to include multiple trees in the directory service. The first is to create multiple directory trees stored in separate databases served by Directory Server. For example, create separate suffixes for example_a and example_b and store them in separate databases. Figure 4.1. Including Multiple Directory Trees in a Database The databases could be stored on a single server or multiple servers depending on resource constraints. 4.2.2. Creating the Directory Tree Structure Decide whether to use a flat or a hierarchical tree structure. As a general rule, try to make the directory tree as flat as possible. However, a certain amount of hierarchy can be important later when information is partitioned across multiple databases, when preparing replication, or when setting access controls. 
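As a concrete illustration of the suffix discussion, and assuming a back-end database for the suffix already exists, the root entry for dc=example,dc=com could be added with the standard ldapadd client roughly as follows; the server URL and bind DN are placeholders, and this sketch is not taken from the guide itself:

ldapadd -x -H ldap://server.example.com -D "cn=Directory Manager" -W <<'EOF'
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example
description: root suffix for the example.com directory tree
EOF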
The structure of the tree involves the following steps and considerations: Section 4.2.2.1, "Branching the Directory" Section 4.2.2.2, "Identifying Branch Points" Section 4.2.2.3, "Replication Considerations" Section 4.2.2.4, "Access Control Considerations" 4.2.2.1. Branching the Directory Design the hierarchy to avoid problematic name changes. The flatter a namespace is, the less likely the names are to change. The likelihood of a name changing is roughly proportional to the number of components in the name that can potentially change. The more hierarchical the directory tree, the more components in the names, and the more likely the names are to change. Following are some guidelines for designing the directory tree hierarchy: Branch the tree to represent only the largest organizational subdivisions in the enterprise. Any such branch points should be limited to divisions (such as Corporate Information Services, Customer Support, Sales, and Engineering). Make sure that the divisions used to branch the directory tree are stable; do not perform this kind of branching if the enterprise reorganizes frequently. Use functional or generic names rather than actual organizational names for the branch points. Names change. While subtrees can be renamed, it can be a long and resource-intensive process for large suffixes with many children entries. Using generic names that represent the function of the organization (for example, use Engineering instead of Widget Research and Development ) makes it much less likely that you will need to rename a subtree after organizational or project changes. If there are multiple organizations that perform similar functions, try creating a single branch point for that function instead of branching based along divisional lines. For example, even if there are multiple marketing organizations, each of which is responsible for a specific product line, create a single ou=Marketing subtree. All marketing entries then belong to that tree. Branching in an Enterprise Environment Name changes can be avoided if the directory tree structure is based on information that is not likely to change. For example, base the structure on types of objects in the tree rather than organizations. This helps avoid shuffling an entry between organizational units, which requires modifying the distinguished name (DN), which is an expensive operation. There are a handful of common objects that are good to use to define the structure: ou=people ou=groups ou=services A directory tree organized using these objects might appear as shown below. Figure 4.2. Example Environment Directory Tree Branching in a Hosting Environment For a hosting environment, create a tree that contains two entries of the object class organization ( o ) and one entry of the object class organizationalUnit ( ou ) beneath the root suffix. For example, Example ISP branches their directory as shown below. Figure 4.3. Example Hosting Directory Tree 4.2.2.2. Identifying Branch Points When planning the branches in the directory tree, decide what attributes to use to identify the branch points. Remember that a DN is a unique string composed of attribute-data pairs. For example, the DN of an entry for Barbara Jensen, an employee of Example Corp., is uid=bjensen,ou=people,dc=example,dc=com . Each attribute-data pair represents a branch point in the directory tree, as shown in Figure 4.4, "The Directory Tree for Example Corp." for the directory tree for the enterprise Example Corp. Figure 4.4. The Directory Tree for Example Corp. 
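To picture the flat, function-based branching described above, the common branch points can be expressed as organizationalUnit entries directly under the root suffix. The sketch below is illustrative only and again assumes the placeholder server URL and bind DN:

ldapadd -x -H ldap://server.example.com -D "cn=Directory Manager" -W <<'EOF'
dn: ou=People,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: People

dn: ou=Groups,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Groups

dn: ou=Services,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Services
EOF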
Figure 4.5, "Directory Tree for Example ISP" shows the directory tree for Example ISP, an Internet host. Figure 4.5. Directory Tree for Example ISP Beneath the suffix entry c=US,o=example , the tree is split into three branches. The ISP branch contains customer data and internal information for Example ISP. The internet branch is the domain tree. The groups branch contains information about the administrative groups. Consider the following when choosing attributes for the branch points: Be consistent. Some LDAP client applications may be confused if the distinguished name (DN) format is inconsistent across the directory tree. That is, if l is subordinate to ou in one part of the directory tree, then make sure l is subordinate to ou in all other parts of the directory service. Try to use only the traditional attributes (shown in Section 4.2.2.2, "Identifying Branch Points" ). Using traditional attributes increases the likelihood of retaining compatibility with third-party LDAP client applications. Using the traditional attributes also means that they are known to the default directory schema, which makes it easier to build entries for the branch DN. Table 4.1. Traditional DN Branch Point Attributes Attribute Definition dc An element of the domain name, such as dc=example ; this is frequently specified in pairs, or even longer, depending on the domain, such as dc=example,dc=com or dc=mtv,dc=example,dc=com . c A country name. o An organization name. This attribute is typically used to represent a large divisional branching such as a corporate division, academic discipline (the humanities, the sciences), subsidiary, or other major branching within the enterprise, as in Section 4.2.1.1, "Suffix Naming Conventions" . ou An organizational unit. This attribute is typically used to represent a smaller divisional branching of the enterprise than an organization. Organizational units are generally subordinate to the preceding organization. st A state or province name. l or locality A locality, such as a city, country, office, or facility name. Note A common mistake is to assume that the directory is searched based on the attributes used in the distinguished name. The distinguished name is only a unique identifier for the directory entry and cannot be used as a search key. Instead, search for entries based on the attribute-data pairs stored on the entry itself. Thus, if the distinguished name of an entry is uid=bjensen,ou=People,dc=example,dc=com , then a search for dc=example does not match that entry unless dc:example has explicitly been added as an attribute in that entry. 4.2.2.3. Replication Considerations During the directory tree design process, consider which entries are being replicated. A natural way to describe a set of entries to be replicated is to specify the DN at the top of a subtree and replicate all entries below it. This subtree also corresponds to a database, a directory partition containing a portion of the directory data. For example, in an enterprise environment, one method is to organize the directory tree so that it corresponds to the network names in the enterprise. Network names tend not to change, so the directory tree structure is stable. Further, using network names to create the top level branches of the directory tree is useful when using replication to tie together different Directory Servers. For instance, Example Corp. has three primary networks known as flightdeck.example.com , tickets.example.com , and hangar.example.com . 
They initially branch their directory tree into three main groups for their major organizational divisions. Figure 4.6. Initial Branching of the Directory Tree for Example Corp. After creating the initial structure of the tree, they create additional branches that show the breakdown of each organizational group. Figure 4.7. Extended Branching for Example Corp. The Example ISP branches their directory in an asymmetrical tree that mirrors their organization. Figure 4.8. Directory Branching for Example ISP After creating the initial structure of their directory tree, they create additional branches for logical subgroups. Figure 4.9. Extended Branching for Example ISP Both the enterprise and the hosting organization design their data hierarchies based on information that is not likely to change often. 4.2.2.4. Access Control Considerations Introducing a hierarchy into the directory tree can be used to enable certain types of access control. As with replication, it is easier to group similar entries and then administer them from a single branch. It is also possible to enable the distribution of administration through a hierarchical directory tree. For example, to give an administrator from the marketing department access to the marketing entries and an administrator from the sales department access to the sales entries, design the directory tree according to those divisions. Access controls can be based on the directory content rather than the directory tree. The filtered mechanism can define a single access control rule stating that a directory entry has access to all entries containing a particular attribute value. For example, set an ACI filter that gives the sales administrator access to all the entries containing the attribute value ou=Sales . However, ACI filters can be difficult to manage. Decide which method of access control is best suited to the directory: organizational branching in the directory tree hierarchy, ACI filters, or a combination of the two. 4.2.3. Naming Entries After designing the hierarchy of the directory tree, decide which attributes to use when naming the entries within the structure. Generally, names are created by choosing one or more of the attribute values to form a relative distinguished name (RDN) . The RDN is a single component within the DN. This is the very first component shown, so the attribute used for that component is the naming attribute , because it sets the unique name for the entry. The attributes to use depends on the type of entry being named. The entry names should adhere to the following rules: The attribute selected for naming should be unlikely to change. The name must be unique across the directory. A unique name ensures that a DN can see at most one entry in the directory. When creating entries, define the RDN within the entry. By defining at least the RDN within the entry, the entry can be located more easily. This is because searches are not performed against the actual DN but rather the attribute values stored in the entry itself. Attribute names have a meaning, so try to use the attribute name that matches the type of entry it represents. For example, do not use l to represent an organization, or c to represent an organizational unit. Section 4.2.3.1, "Naming Person Entries" Section 4.2.3.2, "Naming Group Entries" Section 4.2.3.3, "Naming Organization Entries" Section 4.2.3.4, "Naming Other Kinds of Entries" 4.2.3.1. Naming Person Entries The person entry's name, the DN, must be unique. 
Traditionally, distinguished names use the commonName , or cn , attribute to name their person entries. That is, an entry for a person named Babs Jensen might have the distinguished name of cn=Babs Jensen,dc=example,dc=com . While using the common name makes it easier to associate the person with the entry, it might not be unique enough to exclude people with identical names. This quickly leads to a problem known as DN name collisions , multiple entries with the same distinguished name. Avoid common name collisions by adding a unique identifier to the common name, such as cn=Babs Jensen+employeeNumber=23,dc=example,dc=com . However, this can lead to awkward common names for large directories and can be difficult to maintain. A better method is to identify the person entries with some attribute other than cn . Consider using one of the following attributes: uid Use the uid attribute to specify some unique value of the person. Possibilities include a user login ID or an employee number. A subscriber in a hosting environment should be identified by the uid attribute. mail The mail attribute contains a person's email address, which is always unique. This option can lead to awkward DNs that include duplicate attribute values (such as mail=bjensen@example.com,dc=example,dc=com ), so use this option only if there is not some other unique value to use with the uid attribute. For example, use the mail attribute instead of the uid attribute if the enterprise does not assign employee numbers or user IDs for temporary or contract employees. employeeNumber For employees of the inetOrgPerson object class, consider using an employer-assigned attribute value such as employeeNumber . Whatever attribute-data pair is used for person entry RDNs, make sure that it is a unique, permanent value. Person entry RDNs should also be readable. For example, uid=bjensen,dc=example,dc=com is preferable to uid=b12r56A,dc=example,dc=com because recognizable DNs simplify some directory tasks, such as changing directory entries based on their distinguished names. Also, some directory client applications assume that the uid and cn attributes use human-readable names. Considerations for Person Entries in a Hosted Environment If a person is a subscriber to a service, the entry should be of object class inetUser , and the entry should contain the uid attribute. The attribute must be unique within a customer subtree. If a person is part of the hosting organization, represent them as an inetOrgPerson with the nsManagedPerson object class. Placing Person Entries in the DIT The following are some guidelines for placing person entries in the directory tree: People in an enterprise should be located in the directory tree below the organization's entry. Subscribers to a hosting organization need to be below the ou=people branch for the hosted organization. 4.2.3.2. Naming Group Entries There are four main ways to represent a group: A static group explicitly defines its members. The groupOfNames or groupOfUniqueNames object classes contain values naming the members of the group. Static groups are suitable for groups with few members, such as the group of directory administrators. Static groups are not suitable for groups with thousands of members. Static group entries must contain a uniqueMember attribute value because uniqueMember is a mandatory attribute of the groupOfUniqueNames object. This object class requires the cn attribute, which can be used to form the DN of the group entry.
A dynamic group uses an entry representing the group with a search filter and subtree. Entries matching the filter are members of the group. Roles unify the static and dynamic group concept. See Section 4.3, "Grouping Directory Entries" for more information. In a deployment containing hosted organizations, consider using the groupOfUniqueNames object class to contain the values naming the members of groups used in directory administration. In a hosted organization, we also recommend that group entries used for directory administration be located under the ou=Groups branch. 4.2.3.3. Naming Organization Entries The organization entry name, like other entry names, must be unique. Using the legal name of the organization along with other attribute values helps ensure the name is unique, such as o=example_a+st=Washington,o=ISP,c=US . Trademarks can also be used, but they are not guaranteed to be unique. In a hosting environment, use the organization ( o ) attribute as the naming attribute. 4.2.3.4. Naming Other Kinds of Entries The directory contains entries that represent many things, such as localities, states, countries, devices, servers, network information, and other kinds of data. For these types of entries, use the cn attribute in the RDN if possible. Then, for naming a group entry, name it something like cn=administrators,dc=example,dc=com . However, sometimes an entry's object class does not support the commonName attribute. Instead, use an attribute that is supported by the entry's object class. The attributes used for the entry's DN do not have to correspond to the attributes actually used in the entry. However, having some correlation between the DN attributes and attributes used by the entry simplifies administration of the directory tree. 4.2.4. Renaming Entries and Subtrees Section 4.2.3, "Naming Entries" talks about the importance of naming entries in Red Hat Directory Server. The entry names, in a sense, define the directory tree structure. Each branch point (each entry which has entries beneath it) creates a new link in the hierarchy. Example 4.1. Building Entry DNs When the naming attribute of an entry, the leftmost element of the DN, is changed, this is a modrdn operation . That's a special kind of modify operation because, in a sense, it moves the entry within the directory tree. For leaf entries (entries with no children), modrdn operations are lateral moves; the entry has the same parent, just a new name. Figure 4.10. modrdn Operations for a Leaf Entry For subtree entries, the modrdn operation not only renames the subtree entry itself, but also changes the DN components of all of the children entries beneath the subtree. Figure 4.11. modrdn Operations for a Subtree Entry Important Subtree modrdn operations also move and rename all of the child entries beneath the subtree entry. For large subtrees, this can be a time- and resource-intensive process. Plan the naming structure of your directory tree hierarchy so that it will not require frequent subtree rename operations. A similar action to renaming a subtree is moving an entry from one subtree to another. This is an expanded type of modrdn operation, which simultaneously renames the entry (even if it is the same name) and sets a newsuperior attribute which moves the entry from one parent to another. Figure 4.12. modrdn Operations to a New Parent Entry Both new superior and subtree rename operations are possible because of how entries are stored in the entryrdn.db index. 
Each entry is identified by its own key (a self-link ) and then a subkey which identifies its parent (the parent link ) and any children. This has a format that lays out the directory tree hierarchy by treating parents and children as attribute to an entry, and every entry is described by a unique ID and its RDN, rather than the full DN. For example, the ou=people subtree has a parent of dc=example,dc=com and a child of uid=jsmith . There are some things to keep in mind when performing rename operations: You cannot rename the root suffix. Subtree rename operations have minimal effect on replication. Replication agreements are applied to an entire database, not a subtree within the database, so a subtree rename operation does not require re-configuring a replication agreement. All of the name changes after a subtree rename operation are replicated as normal. Renaming a subtree may require any synchronization agreements to be re-configured. Sync agreements are set at the suffix or subtree level, so renaming a subtree may break synchronization. Renaming a subtree requires that any subtree-level ACIs set for the subtree be re-configured manually, as well as any entry-level ACIs set for child entries of the subtree. You can rename a subtree with children, but you cannot delete a subtree with children. Trying to change the component of a subtree, like moving from ou to dc , may fail with a schema violation. For example, the organizationalUnit object class requires the ou attribute. If that attribute is removed as part of renaming the subtree, then the operation will fail.
[ "dc=example,dc=com => root suffix ou=People,dc=example,dc=com => org unit st=California,ou=People,dc=example,dc=com => state/province l=Mountain View,st=California,ou=People,dc=example,dc=com => city ou=Engineering,l=Mountain View,st=California,ou=People,dc=example,dc=com => org unit uid=jsmith,ou=Engineering,l=Mountain View,st=California,ou=People,dc=example,dc=com => leaf entry", "numeric_id : RDN => self link ID: # ; RDN: \" rdn \"; NRDN: normalized_rdn P # : RDN => parent link ID: # ; RDN: \" rdn \"; NRDN: normalized_rdn C # : RDN => child link ID: # ; RDN: \" rdn \"; NRDN: normalized_rdn", "4:ou=people ID: 4; RDN: \"ou=People\"; NRDN: \"ou=people\" P4:ou=people ID: 1; RDN: \"dc=example,dc=com\"; NRDN: \"dc=example,dc=com\" C4:ou=people ID: 10; RDN: \"uid=jsmith\"; NRDN: \"uid=jsmith\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_directory_tree-designing_directory_tree
Chapter 35. Virtualization
Chapter 35. Virtualization SMEP and SMAP bits masked to enable secondary vCPUs Previously, disabling Extended Page Table (EPT) on a host that supported Supervisor Mode Execution Protection (SMEP) or Supervisor Mode Access Protection (SMAP) resulted in guests being restricted to a single vCPU. This update masks SMEP and SMAP bits on the host side when necessary. As a result, secondary vCPUs start and can be used by the guest virtual machine. (BZ#1273807) Force Reset menu entry in Japanese locale Virtual Machine Manager translated correctly Previously, the Force Reset menu entry was translated incorrectly in the Japanese locale Virtual Machine Manager. In this update the Force Reset menu entry is translated correctly. (BZ#1282276) Limited KSM deduplication factor Previously, the kernel same-page merging (KSM) deduplication factor was not explicitly limited, which caused Red Hat Enterprise Linux hosts to have performance problems or become unresponsive in case of high workloads. This update limits the KSM deduplication factor, and thus eliminates the described problems with virtual memory operations related to KSM pages. (BZ#1298618) VMDK images with streamOptimized sub-format are accepted Previously, a Virtual Machine Disk (VMDK) image with a streamOptimized sub-format created by the qemu-img tool was rejected by Elastic Sky X (ESX) services, because the version number of the VMDK image was too low. In this update, the sub-format number of streamOptimized VMDK images are automatically increased. This results in the VMDK image being accepted by ESX services. (BZ# 1299116 ) Data layout of VMDK images with streamOptimized sub-format was incorrect Previously, the data layout of a Virtual Machine Disk (VMDK) image with a streamOptimized sub-format created by the qemu-img tool was incorrect. This prevented the VMDK image from being bootable when imported to ESX servers. In this update, the image is converted to a valid VMDK streamOptimized image. This results in the VMDK image being bootable. (BZ# 1299250 ) blockcopy with --pivot option no longer fails Previously, blockcopy always failed when the --pivot option was specified. With this release, the libvirt package was updated to prevent this issue. blockcopy can now be used with the --pivot option. (BZ# 1197592 ) Guest display problems after virt-v2v conversion have been fixed Previously, the video card driver setting of a guest converted with the virt-v2v utility was ignored, causing various display problems in the guest. This update ensures that virt-v2v generates the libvirt XML file for the converted guest properly. As a result, the video card setting is preserved, and the guest can take full advantage of graphical capabilities after the conversion. (BZ# 1225789 ) Migrating MSR_TSC_AUX works properly Previously, the contents of the MSR_TSC_AUX file were sometimes not migrated correctly during guest migration. As a consequence, the guest terminated unexpectedly after the migration finished. This update ensures that the contents of MSR_TSC_AUX are migrated as expected, and the described crashes no longer occur. (BZ# 1265427 ) Windows guest virtual machine information removed from documentation In this update, all references to Windows guest virtual machines have been removed from the documentation. 
The information was moved to the following knowledgebase article: https://access.redhat.com/articles/2470791 (BZ# 1262007 ) Accessing guest disks on virt-manager works properly with SELinux and libguestfs-python Prior to this update, when the libguestfs-python package was installed and SELinux was enabled on the host machine, accessing guest disks using the virt-manager interface caused I/O failures. Now, virt-manager and the libguestfs library share the same libvirt connection, which prevents the described failures from occurring. (BZ# 1173695 )
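Two of the fixes above concern streamOptimized VMDK images produced by the qemu-img tool. As background only, a minimal conversion that selects this sub-format looks roughly like the following; the file names are placeholders:

# convert a qcow2 disk image to a streamOptimized VMDK suitable for upload to ESX
qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized disk.qcow2 disk.vmdk
# inspect the result to confirm the format details
qemu-img info disk.vmdk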
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/bug_fixes_virtualization
Chapter 26. A Starting Point WSDL Contract
Chapter 26. A Starting Point WSDL Contract 26.1. Sample WSDL Contract Example 26.1, "HelloWorld WSDL Contract" shows the HelloWorld WSDL contract. This contract defines a single interface, Greeter, in the wsdl:portType element. The contract also defines the endpoint which will implement the service in the wsdl:port element. Example 26.1. HelloWorld WSDL Contract The Greeter interface defined in Example 26.1, "HelloWorld WSDL Contract" defines the following operations: sayHi - Has a single output parameter, of xsd:string . greetMe - Has an input parameter, of xsd:string , and an output parameter, of xsd:string . greetMeOneWay - Has a single input parameter, of xsd:string . Because this operation has no output parameters, it is optimized to be a oneway invocation (that is, the consumer does not wait for a response from the server). pingMe - Has no input parameters and no output parameters, but it can raise a fault exception.
[ "<?xml version=\"1.0\" encoding=\";UTF-8\"?> <wsdl:definitions name=\"HelloWorld\" targetNamespace=\"http://apache.org/hello_world_soap_http\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:tns=\"http://apache.org/hello_world_soap_http\" xmlns:x1=\"http://apache.org/hello_world_soap_http/types\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\"> <wsdl:types> <schema targetNamespace=\"http://apache.org/hello_world_soap_http/types\" xmlns=\"http://www.w3.org/2001/XMLSchema\" elementFormDefault=\"qualified\"> <element name=\"sayHiResponse\"> <complexType> <sequence> <element name=\"responseType\" type=\"string\"/> </sequence> </complexType> </element> <element name=\"greetMe\"> <complexType> <sequence> <element name=\"requestType\" type=\"string\"/> </sequence> </complexType> </element> <element name=\"greetMeResponse\"> <complexType> <sequence> <element name=\"responseType\" type=\"string\"/> </sequence> </complexType> </element> <element name=\"greetMeOneWay\"> <complexType> <sequence> <element name=\"requestType\" type=\"string\"/> </sequence> </complexType> </element> <element name=\"pingMe\"> <complexType/> </element> <element name=\"pingMeResponse\"> <complexType/> </element> <element name=\"faultDetail\"> <complexType> <sequence> <element name=\"minor\" type=\"short\"/> <element name=\"major\" type=\"short\"/> </sequence> </complexType> </element> </schema> </wsdl:types> <wsdl:message name=\"sayHiRequest\"> <wsdl:part element=\"x1:sayHi\" name=\"in\"/> </wsdl:message> <wsdl:message name=\"sayHiResponse\"> <wsdl:part element=\"x1:sayHiResponse\" name=\"out\"/> </wsdl:message> <wsdl:message name=\"greetMeRequest\"> <wsdl:part element=\"x1:greetMe\" name=\"in\"/> </wsdl:message> <wsdl:message name=\"greetMeResponse\"> <wsdl:part element=\"x1:greetMeResponse\" name=\"out\"/> </wsdl:message> <wsdl:message name=\"greetMeOneWayRequest\"> <wsdl:part element=\"x1:greetMeOneWay\" name=\"in\"/> </wsdl:message> <wsdl:message name=\"pingMeRequest\"> <wsdl:part name=\"in\" element=\"x1:pingMe\"/> </wsdl:message> <wsdl:message name=\"pingMeResponse\"> <wsdl:part name=\"out\" element=\"x1:pingMeResponse\"/> </wsdl:message> <wsdl:message name=\"pingMeFault\"> <wsdl:part name=\"faultDetail\" element=\"x1:faultDetail\"/> </wsdl:message> <wsdl:portType name=\"Greeter\"> <wsdl:operation name=\"sayHi\"> <wsdl:input message=\"tns:sayHiRequest\" name=\"sayHiRequest\"/> <wsdl:output message=\"tns:sayHiResponse\" name=\"sayHiResponse\"/> </wsdl:operation> <wsdl:operation name=\"greetMe\"> <wsdl:input message=\"tns:greetMeRequest\" name=\"greetMeRequest\"/> <wsdl:output message=\"tns:greetMeResponse\" name=\"greetMeResponse\"/> </wsdl:operation> <wsdl:operation name=\"greetMeOneWay\"> <wsdl:input message=\"tns:greetMeOneWayRequest\" name=\"greetMeOneWayRequest\"/> </wsdl:operation> <wsdl:operation name=\"pingMe\"> <wsdl:input name=\"pingMeRequest\" message=\"tns:pingMeRequest\"/> <wsdl:output name=\"pingMeResponse\" message=\"tns:pingMeResponse\"/> <wsdl:fault name=\"pingMeFault\" message=\"tns:pingMeFault\"/> </wsdl:operation> </wsdl:portType> <wsdl:binding name=\"Greeter_SOAPBinding\" type=\"tns:Greeter\"> </wsdl:binding> <wsdl:service name=\"SOAPService\"> <wsdl:port binding=\"tns:Greeter_SOAPBinding\" name=\"SoapPort\"> <soap:address location=\"http://localhost:9000/SoapContext/SoapPort\"/> </wsdl:port> </wsdl:service> </wsdl:definitions>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwswsdldevcontract
8.186. rsyslog
8.186. rsyslog 8.186.1. RHBA-2013:1716 - rsyslog bug fix update Updated rsyslog packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The rsyslog packages provide an enhanced, multi-threaded syslog daemon. It supports MySQL, syslog/TCP, RFC 3195, permitted sender lists, filtering on any message part, and fine-grained output format control. Bug Fixes BZ# 862517 The imgssapi module is initialized as soon as the configuration file reader encounters the $InputGSSServerRun directive in the /etc/rsyslog.conf configuration file. The supplementary options configured after $InputGSSServerRun are therefore ignored. For the configuration to take effect, all imgssapi configuration options must be placed before $InputGSSServerRun. Previously, when this order was reversed, the rsyslogd daemon terminated unexpectedly with a segmentation fault. This bug has been fixed, and rsyslogd no longer crashes in the described scenario. BZ# 886117 Rsyslog directives used for controlling the file owner or group (FileOwner, FileGroup, DirOwner, DirGroup) translate names to numerical IDs only during rsyslog's initialization. Previously, when user data were not available at rsyslog's startup, IDs were not assigned to these log files. With this update, new directives that do not depend on the translation process have been added (FileOwnerId, FileGroupId, DirOwnerId, DirGroupId). As a result, log files are assigned the correct user or group ID even when user information is not available during rsyslog's startup. BZ# 893197 Due to a bug in the source code, the host name was replaced by an empty string if the $RepeatedMsgReduction directive was enabled. This bug has been fixed, and the host name is now stored correctly when $RepeatedMsgReduction is on. BZ# 924754 Prior to this update, the $FileGroup directive did not process groups larger than a certain size. Consequently, when this size was reached, the rsyslogd daemon failed to set the requested group and the root user was left as the owner of a file. This bug has been fixed, and $FileGroup now processes groups properly in the described case. BZ# 927405 An erroneous patch in a previous release, which changed the implementation of the configuration file parser, caused the rsyslogd daemon to terminate unexpectedly with a segmentation fault for certain configurations. With this update, the patch has been removed, and the crashes no longer occur with the default configuration. However, the $IncludeConfig directive must be placed at the beginning of the /etc/rsyslog.conf configuration file before other directives. If there is a need to use $IncludeConfig further in the file, users are advised to prepend it with a dummy action such as "syslog.debug /dev/null". BZ# 951727 Prior to this update, a numerical value of the PRI property was appended to the pri-text variable. The resulting pri-text value looked, for example, like "local0.info<164>". With this update, the suffix has been removed. Now, the variable only contains the textual facility and severity values. BZ# 963942 Previously, an incorrect data type was set for the variable holding the spool file size limit. Consequently, the intended size limit was not accepted and message loss could occur. With this update, the data type of the aforementioned variable has been corrected. As a result, spool files are set correctly with the user-defined size limit. Users of rsyslog are advised to upgrade to these updated packages, which fix these bugs.
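To make the ordering requirements described in these fixes easier to picture, the fragment below sketches an /etc/rsyslog.conf layout that respects them; the numeric IDs, the port, and the particular imgssapi options shown are illustrative placeholders rather than values mandated by this erratum:

# dummy action so that $IncludeConfig is not the very first directive in the file
syslog.debug /dev/null
$IncludeConfig /etc/rsyslog.d/*.conf

# ID-based ownership directives that do not depend on name-to-ID translation at startup
$FileOwnerId 0
$FileGroupId 190
$DirOwnerId 0
$DirGroupId 190

# all imgssapi options must appear before $InputGSSServerRun starts the listener
$ModLoad imgssapi
$InputGSSServerPermitPlainTCP on
$InputGSSServerRun 514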
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rsyslog
1.4. Additional Resources
1.4. Additional Resources To find more information about resource control under systemd , the unit hierarchy, as well as the kernel resource controllers, see the materials listed below: Installed Documentation Cgroup-Related Systemd Documentation The following manual pages contain general information on unified cgroup hierarchy under systemd : systemd.resource-control (5) - describes the configuration options for resource control shared by system units. systemd.unit (5) - describes common options of all unit configuration files. systemd.slice (5) - provides general information about .slice units. systemd.scope (5) - provides general information about .scope units. systemd.service (5) - provides general information about .service units. Controller-Specific Kernel Documentation The kernel-doc package provides a detailed documentation of all resource controllers. This package is included in the Optional subscription channel. Before subscribing to the Optional channel, see the Scope of Coverage Details for Optional software, then follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on Red Hat Customer Portal. To install kernel-doc from the Optional channel, type as root : After the installation, the following files will appear under the /usr/share/doc/kernel-doc- <kernel_version> /Documentation/cgroups/ directory: blkio subsystem - blkio-controller.txt cpuacct subsystem - cpuacct.txt cpuset subsystem - cpusets.txt devices subsystem - devices.txt freezer subsystem - freezer-subsystem.txt memory subsystem - memory.txt net_cls subsystem - net_cls.txt Additionally, see the following files on further information about the cpu subsystem: Real-Time scheduling - /usr/share/doc/kernel-doc- <kernel_version> /Documentation/scheduler/sched-rt-group.txt CFS scheduling - /usr/share/doc/kernel-doc- <kernel_version> /Documentation/scheduler/sched-bwc.txt Online Documentation Red Hat Enterprise Linux 7 System Administrator's Guide - The System Administrator's Guide documents relevant information regarding the deployment, configuration, and administration of Red Hat Enterprise Linux 7. This guide contains a detailed explanation of the systemd concepts as well as instructions for service management with systemd . The D-Bus API of systemd - The reference material for D-Bus API commands used to interact with systemd .
[ "~]# yum install kernel-doc" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/sec-introduction_to_control_groups-additional_resources
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/making-open-source-more-inclusive
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.432_release_notes/providing-direct-documentation-feedback_openjdk
Chapter 4. Installing a cluster with customizations
Chapter 4. Installing a cluster with customizations Use the following procedures to install an OpenShift Container Platform cluster with customizations using the Agent-based Installer. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall or proxy, you configured it to allow the sites that your cluster requires access to. 4.2. Installing OpenShift Container Platform with the Agent-based Installer The following procedures deploy a single-node OpenShift Container Platform in a disconnected environment. You can use these procedures as a basis and modify according to your requirements. 4.2.1. Downloading the Agent-based Installer Procedure Use this procedure to download the Agent-based Installer and the CLI needed for your installation. Log in to the OpenShift Container Platform web console using your login credentials. Navigate to Datacenter . Click Run Agent-based Installer locally . Select the operating system and architecture for the OpenShift Installer and Command line interface . Click Download Installer to download and extract the install program. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret . Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH . 4.2.2. Creating the preferred configuration inputs Use this procedure to create the preferred configuration inputs used to create the agent image. Procedure Install nmstate dependency by running the following command: USD sudo dnf install /usr/bin/nmstatectl -y Place the openshift-install binary in a directory that is on your PATH . Create a directory to store the install configuration by running the following command: USD mkdir ~/<directory_name> Note This is the preferred method for the Agent-based installation. Using GitOps ZTP manifests is optional. Create the install-config.yaml file by running the following command: USD cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 4 sshKey: '<ssh_pub_key>' 5 EOF 1 Specify the system architecture, valid values are amd64 and arm64 . 2 Required. Specify your cluster name. 3 Specify the cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 4 Specify your pull secret. 5 Specify your SSH public key. Note If you set the platform to vSphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) IPv6 is supported only on bare metal platforms. 
Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 Note When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file. Create the agent-config.yaml file by running the following command: USD cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 -hop-address: 192.168.111.2 -hop-interface: eno1 table-id: 254 EOF 1 This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig . 2 Optional: Host configuration. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. 3 Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods. 4 Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. It examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. 5 Optional: Configures the network interface of a host in NMState format. Additional resources Configuring the Agent-based Installer to use mirrored images 4.2.3. Using ZTP manifests As an optional task, you can use GitOps Zero Touch Provisioning (ZTP) manifests to configure your installation beyond the options available through the install-config.yaml and agent-config.yaml files. Note GitOps ZTP manifests can be generated with or without configuring the install-config.yaml and agent-config.yaml files beforehand. If you chose to configure the install-config.yaml and agent-config.yaml files, the configurations will be imported to the ZTP cluster manifests when they are generated. Prerequisites You have placed the openshift-install binary in a directory that is on your PATH . Optional: You have created and configured the install-config.yaml and agent-config.yaml files. Procedure Generate ZTP cluster manifests by running the following command: USD openshift-install agent create cluster-manifests --dir <installation_directory> Important If you have created the install-config.yaml and agent-config.yaml files, those files are deleted and replaced by the cluster manifests generated through this command. 
Any configurations made to the install-config.yaml and agent-config.yaml files are imported to the ZTP cluster manifests when you run the openshift-install agent create cluster-manifests command. Navigate to the cluster-manifests directory by running the following command: USD cd <installation_directory>/cluster-manifests Configure the manifest files in the cluster-manifests directory. For sample files, see the "Sample GitOps ZTP custom resources" section. Disconnected clusters: If you did not define mirror configuration in the install-config.yaml file before generating the ZTP manifests, perform the following steps: Navigate to the mirror directory by running the following command: USD cd ../mirror Configure the manifest files in the mirror directory. Additional resources Sample GitOps ZTP custom resources . See Challenges of the network far edge to learn more about GitOps ZTP. 4.2.4. Creating and booting the agent image Use this procedure to boot the agent image on your machines. Procedure Create the agent image by running the following command: USD openshift-install --dir <install_directory> agent create image Note Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration. Boot the agent.x86_64.iso or agent.aarch64.iso image on the bare metal machines. 4.2.5. Verifying that the current installation host can pull release images After you boot the agent image and network services are made available to the host, the agent console application performs a pull check to verify that the current host can retrieve release images. If the primary pull check passes, you can quit the application to continue with the installation. If the pull check fails, the application performs additional checks, as seen in the Additional checks section of the TUI, to help you troubleshoot the problem. A failure for any of the additional checks is not necessarily critical as long as the primary pull check succeeds. If there are host network configuration issues that might cause an installation to fail, you can use the console application to make adjustments to your network configurations. Important If the agent console application detects host network configuration issues, the installation workflow will be halted until the user manually stops the console application and signals the intention to proceed. Procedure Wait for the agent console application to check whether or not the configured release image can be pulled from a registry. If the agent console application states that the installer connectivity checks have passed, wait for the prompt to time out to continue with the installation. Note You can still choose to view or change network configuration settings even if the connectivity checks have passed. However, if you choose to interact with the agent console application rather than letting it time out, you must manually quit the TUI to proceed with the installation. If the agent console application checks have failed, which is indicated by a red icon beside the Release image URL pull check, use the following steps to reconfigure the host's network settings: Read the Check Errors section of the TUI. This section displays error messages specific to the failed checks. Select Configure network to launch the NetworkManager TUI. 
Select Edit a connection and select the connection you want to reconfigure. Edit the configuration and select OK to save your changes. Select Back to return to the main screen of the NetworkManager TUI. Select Activate a Connection . Select the reconfigured network to deactivate it. Select the reconfigured network again to reactivate it. Select Back and then select Quit to return to the agent console application. Wait at least five seconds for the continuous network checks to restart using the new network configuration. If the Release image URL pull check succeeds and displays a green icon beside the URL, select Quit to exit the agent console application and continue with the installation. 4.2.6. Tracking and verifying installation progress Use the following procedure to track installation progress and to verify a successful installation. Prerequisites You have configured a DNS record for the Kubernetes API server. Procedure Optional: To know when the bootstrap host (rendezvous host) reboots, run the following command: USD ./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <install_directory> , specify the path to the directory where the agent ISO was generated. 2 To view different installation details, specify warn , debug , or error instead of info . Example output ................................................................... ................................................................... INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. To track the progress and verify successful installation, run the following command: USD openshift-install --dir <install_directory> agent wait-for install-complete 1 1 For <install_directory> directory, specify the path to the directory where the agent ISO was generated. Example output ................................................................... ................................................................... INFO Cluster is installed INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com Note If you are using the optional method of GitOps ZTP manifests, you can configure IP address endpoints for cluster nodes through the AgentClusterInstall.yaml file in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) IPv6 is supported only on bare metal platforms. Example of dual-stack networking apiVIP: 192.168.11.3 ingressVIP: 192.168.11.4 clusterDeploymentRef: name: mycluster imageSetRef: name: openshift-4.13 networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes Additional resources See Deploying with dual-stack networking . See Configuring the install-config yaml file . See Configuring a three-node cluster to deploy three-node clusters in bare metal environments. See About root device hints . See NMState state examples . 4.3. Sample GitOps ZTP custom resources You can optionally use GitOps Zero Touch Provisioning (ZTP) custom resource (CR) objects to install an OpenShift Container Platform cluster with the Agent-based Installer. 
You can customize the following GitOps ZTP custom resources to specify more details about your OpenShift Container Platform cluster. The following sample GitOps ZTP custom resources are for a single-node cluster. Example agent-cluster-install.yaml file apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: test-agent-cluster-install namespace: cluster0 spec: clusterDeploymentRef: name: ostest imageSetRef: name: openshift-4.13 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <ssh_public_key> Example cluster-deployment.yaml file apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ostest namespace: cluster0 spec: baseDomain: test.metalkube.org clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: test-agent-cluster-install version: v1beta1 clusterName: ostest controlPlaneConfig: servingCertificates: {} platform: agentBareMetal: agentSelector: matchLabels: bla: aaa pullSecretRef: name: pull-secret Example cluster-image-set.yaml file apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.13 spec: releaseImage: registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2022-06-06-025509 Example infra-env.yaml file apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: cluster0 spec: clusterRef: name: ostest namespace: cluster0 cpuArchitecture: aarch64 pullSecretRef: name: pull-secret sshAuthorizedKey: <ssh_public_key> nmStateConfigLabelSelector: matchLabels: cluster0-nmstate-label-name: cluster0-nmstate-label-value Example nmstateconfig.yaml file apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.122.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 next-hop-interface: eth0 table-id: 254 interfaces: - name: "eth0" macAddress: 52:54:01:aa:aa:a1 Example pull-secret.yaml file apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: cluster0 stringData: .dockerconfigjson: <pull_secret> Additional resources See Challenges of the network far edge to learn more about GitOps Zero Touch Provisioning (ZTP). 4.4. Gathering log data from a failed Agent-based installation Use the following procedure to gather log data about a failed Agent-based installation to provide for a support case. Prerequisites You have configured a DNS record for the Kubernetes API server. Procedure Run the following command and collect the output: USD ./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug Example error message ...
ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded If the output from the command indicates a failure, or if the bootstrap is not progressing, run the following command to connect to the rendezvous host and collect the output: USD ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz Note Red Hat Support can diagnose most issues using the data gathered from the rendezvous host, but if some hosts are not able to register, gathering this data from every host might be helpful. If the bootstrap completes and the cluster nodes reboot, run the following command and collect the output: USD ./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug If the output from the command indicates a failure, perform the following steps: Export the kubeconfig file to your environment by running the following command: USD export KUBECONFIG=<install_directory>/auth/kubeconfig Gather information for debugging by running the following command: USD oc adm must-gather Create a compressed file from the must-gather directory that was just created in your working directory by running the following command: USD tar cvaf must-gather.tar.gz <must_gather_directory> Excluding the /auth subdirectory, attach the installation directory used during the deployment to your support case on the Red Hat Customer Portal . Attach all other data gathered from this procedure to your support case.
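When you need to collect this support data repeatedly, the individual commands in the procedure can be strung together in a small helper script. The following is a minimal sketch only, assuming the installation directory and the rendezvous host IP address are passed as arguments and that the oc and ssh clients are already configured; the script name and the must-gather destination directory are illustrative, while the underlying commands are the ones documented above.
#!/usr/bin/env bash
# Minimal sketch: gather agent, must-gather, and installation-directory data
# for a support case. Usage: ./gather-agent-data.sh <install_directory> <node-ip>
set -euo pipefail
install_dir="$1"
node_ip="$2"
# Collect diagnostic data from the rendezvous host.
ssh "core@${node_ip}" agent-gather -O > agent-gather.tar.xz
# If the cluster API is reachable, collect a must-gather archive as well.
export KUBECONFIG="${install_dir}/auth/kubeconfig"
if oc adm must-gather --dest-dir=must-gather; then
  tar cvaf must-gather.tar.gz must-gather
fi
# Archive the installation directory; --exclude auth skips the auth
# subdirectory (and any other member named auth).
tar cvaf install-dir.tar.gz --exclude auth "${install_dir}"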
[ "sudo dnf install /usr/bin/nmstatectl -y", "mkdir ~/<directory_name>", "cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 4 sshKey: '<ssh_pub_key>' 5 EOF", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5", "cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF", "openshift-install agent create cluster-manifests --dir <installation_directory>", "cd <installation_directory>/cluster-manifests", "cd ../mirror", "openshift-install --dir <install_directory> agent create image", "./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \\ 1 --log-level=info 2", "................................................................ ................................................................ INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete", "openshift-install --dir <install_directory> agent wait-for install-complete 1", "................................................................ ................................................................ INFO Cluster is installed INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com", "apiVIP: 192.168.11.3 ingressVIP: 192.168.11.4 clusterDeploymentRef: name: mycluster imageSetRef: name: openshift-4.13 networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: test-agent-cluster-install namespace: cluster0 spec: clusterDeploymentRef: name: ostest imageSetRef: name: openshift-4.13 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <ssh_public_key>", "apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ostest namespace: cluster0 spec: baseDomain: test.metalkube.org clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: test-agent-cluster-install version: v1beta1 clusterName: ostest controlPlaneConfig: servingCertificates: {} platform: agentBareMetal: agentSelector: matchLabels: bla: aaa pullSecretRef: name: pull-secret", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.13 spec: releaseImage: registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2022-06-06-025509", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: cluster0 spec: clusterRef: name: ostest namespace: cluster0 cpuArchitecture: aarch64 pullSecretRef: name: pull-secret sshAuthorizedKey: <ssh_public_key> nmStateConfigLabelSelector: matchLabels: cluster0-nmstate-label-name: cluster0-nmstate-label-value", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.122.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 next-hop-interface: eth0 table-id: 254 interfaces: - name: \"eth0\" macAddress: 52:54:01:aa:aa:a1", "apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: cluster0 stringData: .dockerconfigjson: <pull_secret>", "./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug", "ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded", "ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz", "./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug", "export KUBECONFIG=<install_directory>/auth/kubeconfig", "oc adm must-gather", "tar cvaf must-gather.tar.gz <must_gather_directory>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installing-with-agent-based-installer
Chapter 2. Attackers and Vulnerabilities
Chapter 2. Attackers and Vulnerabilities To plan and implement a good security strategy, first be aware of some of the issues which determined, motivated attackers exploit to compromise systems. But before detailing these issues, the terminology used when identifying an attacker must be defined. 2.1. A Quick History of Hackers The modern meaning of the term hacker has origins dating back to the 1960s and the Massachusetts Institute of Technology (MIT) Tech Model Railroad Club, which designed train sets of large scale and intricate detail. Hacker was a name used for club members who discovered a clever trick or workaround for a problem. The term hacker has since come to describe everything from computer buffs to gifted programmers. A common trait among most hackers is a willingness to explore in detail how computer systems and networks function with little or no outside motivation. Open source software developers often consider themselves and their colleagues to be hackers, and use the word as a term of respect. Typically, hackers follow a form of the hacker ethic which dictates that the quest for information and expertise is essential, and that sharing this knowledge is the hackers duty to the community. During this quest for knowledge, some hackers enjoy the academic challenges of circumventing security controls on computer systems. For this reason, the press often uses the term hacker to describe those who illicitly access systems and networks with unscrupulous, malicious, or criminal intent. The more accurate term for this type of computer hacker is cracker - a term created by hackers in the mid-1980s to differentiate the two communities. 2.1.1. Shades of Grey Within the community of individuals who find and exploit vulnerabilities in systems and networks are several distinct groups. These groups are often described by the shade of hat that they "wear" when performing their security investigations and this shade is indicative of their intent. The white hat hacker is one who tests networks and systems to examine their performance and determine how vulnerable they are to intrusion. Usually, white hat hackers crack their own systems or the systems of a client who has specifically employed them for the purposes of security auditing. Academic researchers and professional security consultants are two examples of white hat hackers. A black hat hacker is synonymous with a cracker. In general, crackers are less focused on programming and the academic side of breaking into systems. They often rely on available cracking programs and exploit well known vulnerabilities in systems to uncover sensitive information for personal gain or to inflict damage on the target system or network. The grey hat hacker , on the other hand, has the skills and intent of a white hat hacker in most situations but uses his knowledge for less than noble purposes on occasion. A grey hat hacker can be thought of as a white hat hacker who wears a black hat at times to accomplish his own agenda. Grey hat hackers typically subscribe to another form of the hacker ethic, which says it is acceptable to break into systems as long as the hacker does not commit theft or breach confidentiality. Some would argue, however, that the act of breaking into a system is in itself unethical. Regardless of the intent of the intruder, it is important to know the weaknesses a cracker may likely attempt to exploit. The remainder of the chapter focuses on these issues.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-risk
Chapter 6. Upgrading an Operator-based broker deployment
Chapter 6. Upgrading an Operator-based broker deployment The procedures in this section show how to upgrade: The AMQ Broker Operator version, using both the OpenShift command-line interface (CLI) and OperatorHub The broker container image for an Operator-based broker deployment 6.1. Before you begin This section describes some important considerations before you upgrade the Operator and broker container images for an Operator-based broker deployment. Upgrading the Operator using either the OpenShift command-line interface (CLI) or OperatorHub requires cluster administrator privileges for your OpenShift cluster. If you originally used the CLI to install the Operator, you should also use the CLI to upgrade the Operator. If you originally used OperatorHub to install the Operator (that is, it appears under Operators Installed Operators for your project in the OpenShift Container Platform web console), you should also use OperatorHub to upgrade the Operator. For more information about these upgrade methods, see: Section 6.2, "Upgrading the Operator using the CLI" Section 6.3, "Upgrading the Operator using OperatorHub" If the redeliveryDelayMultiplier and the redeliveryCollisionAvoidanceFactor attributes are configured in the main broker CR in a 7.8.x or 7.9.x deployment, the new Operator is unable to reconcile any CR after you upgrade to 7.10.x. The reconciliation fails because the data type of both attributes changed from float to string in 7.10.x. You can work around this issue by deleting the redeliveryDelayMultiplier and the redeliveryCollisionAvoidanceFactor attributes from the spec.deploymentPlan.addressSettings.addressSetting element. Then, configure the attributes in the brokerProperties element. For example: spec: ... brokerProperties: - "addressSettings.#.redeliveryMultiplier=2.1" - "addressSettings.#.redeliveryCollisionAvoidanceFactor=1.2" Note In the brokerProperties element, use the redeliveryMultiplier attribute name instead of the redeliveryDelayMultiplier attribute name that you deleted. If you want to deploy the Operator to watch many namespaces, for example to watch all namespaces, you must: Make sure you have backed up all the CRs relating to broker deployments in your cluster. Uninstall the existing Operator. Deploy the 7.10 Operator to watch the namespaces you require. Check all your deployments and recreate if necessary. 6.2. Upgrading the Operator using the CLI The procedures in this section show how to use the OpenShift command-line interface (CLI) to upgrade different versions of the Operator to the latest version available for AMQ Broker 7.10. 6.2.1. Prerequisites You should use the CLI to upgrade the Operator only if you originally used the CLI to install the Operator. If you originally used OperatorHub to install the Operator (that is, the Operator appears under Operators Installed Operators for your project in the OpenShift Container Platform web console), you should use OperatorHub to upgrade the Operator. To learn how to upgrade the Operator using OperatorHub, see Section 6.3, "Upgrading the Operator using OperatorHub" . 6.2.2. Upgrading the Operator using the CLI You can use the OpenShift command-line interface (CLI) to upgrade the Operator to the latest version for AMQ Broker 7.10. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.10.7 patches . Ensure that the value of the Version drop-down list is set to 7.10.7 and the Releases tab is selected. Next to AMQ Broker 7.10.7 Operator Installation and Example Files , click Download .
Download of the amq-broker-operator-7.10.7-ocp-install-examples.zip compressed archive automatically begins. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/broker/operator . USD mkdir ~/broker/operator USD mv amq-broker-operator-7.10.7-ocp-install-examples.zip ~/broker/operator In your chosen installation directory, extract the contents of the archive. For example: USD cd ~/broker/operator USD unzip amq-broker-operator-7.10.7-ocp-install-examples.zip Log in to OpenShift Container Platform as an administrator for the project that contains your existing Operator deployment. USD oc login -u <user> Switch to the OpenShift project in which you want to upgrade your Operator version. USD oc project <project-name> In the deploy directory of the latest Operator archive that you downloaded and extracted, open the operator.yaml file. Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Open the operator.yaml file for your previous Operator deployment. Check that any non-default values that you specified in your configuration are replicated in the new operator.yaml configuration file. In the new operator.yaml file, the Operator is named controller-manager by default. Replace all instances of controller-manager with amq-broker-operator , which was the name of the Operator in previous versions, and save the file. For example: spec: ... selector: matchLabels: name: amq-broker-operator ... Update the CRDs that are included with the Operator. You must update the CRDs before you deploy the Operator. Update the main broker CRD. USD oc apply -f deploy/crds/broker_activemqartemis_crd.yaml Update the address CRD. USD oc apply -f deploy/crds/broker_activemqartemisaddress_crd.yaml Update the scaledown controller CRD. USD oc apply -f deploy/crds/broker_activemqartemisscaledown_crd.yaml Update the security CRD. USD oc apply -f deploy/crds/broker_activemqartemissecurity_crd.yaml If you are upgrading from AMQ Broker Operator 7.10.0 only, delete the Operator and the StatefulSet. By default, the new Operator deletes the StatefulSet to remove custom and Operator metering labels, which were incorrectly added to the StatefulSet selector by the Operator in 7.10.0. When the Operator deletes the StatefulSet, it also deletes the existing broker Pods, which causes a temporary broker outage. If you want to avoid an outage, complete the following steps to delete the Operator and the StatefulSet without deleting the broker Pods. Delete the Operator. USD oc delete -f deploy/operator.yaml Delete the StatefulSet with the --cascade=orphan option to orphan the broker Pods. The orphaned broker Pods continue to run after the StatefulSet is deleted. USD oc delete statefulset <statefulset-name> --cascade=orphan If you are upgrading from AMQ Broker Operator 7.10.0 or 7.10.1, check if your main broker CR has labels called application or ActiveMQArtemis configured in the deploymentPlan.labels attribute. These labels are reserved for the Operator to assign labels to Pods and are not permitted as custom labels after 7.10.1. If these custom labels were configured in the main broker CR, the Operator-assigned labels on the Pods were overwritten by the custom labels.
If either of these custom labels are configured in the main broker CR, complete the following steps to restore the correct labels on the Pods and remove the labels from the CR. If you are upgrading from 7.10.0, you deleted the Operator in the previous step. If you are upgrading from 7.10.1, delete the Operator. USD oc delete -f deploy/operator.yaml Run the following command to restore the correct Pod labels. In the following example, 'ex-aao' is the name of the StatefulSet deployed. USD for pod in USD(oc get pods | grep -o '^ex-aao[^ ]*'); do oc label --overwrite pods USDpod ActiveMQArtemis=ex-aao application=ex-aao-app; done Delete the application and ActiveMQArtemis labels from the deploymentPlan.labels attribute in the CR. Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. In the deploymentPlan.labels attribute in the CR, delete any custom labels called application or ActiveMQArtemis . Save the CR file. Deploy the CR instance. Switch to the project for the broker deployment. Apply the CR. If you deleted the Operator, deploy the new Operator. USD oc create -f deploy/operator.yaml Apply the updated Operator configuration. USD oc apply -f deploy/operator.yaml The new Operator can recognize and manage your broker deployments. If automatic updates are enabled in the CR of your deployment, the Operator's reconciliation process upgrades each broker pod. If automatic updates are not enabled, you can enable them by setting the following attributes in your CR: spec: ... upgrades: enabled: true minor: true For more information on enabling automatic updates, see Section 6.4, "Upgrading the broker container image by specifying an AMQ Broker version" . Note If the reconciliation process does not start, you can start the process by scaling the deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . Add attributes to the CR for the new features that are available in the upgraded broker, as required. 6.3. Upgrading the Operator using OperatorHub This section describes how to use OperatorHub to upgrade the Operator for AMQ Broker. 6.3.1. Prerequisites You should use OperatorHub to upgrade the Operator only if you originally used OperatorHub to install the Operator (that is, the Operator appears under Operators Installed Operators for your project in the OpenShift Container Platform web console). By contrast, if you originally used the OpenShift command-line interface (CLI) to install the Operator, you should also use the CLI to upgrade the Operator. To learn how to upgrade the Operator using the CLI, see Section 6.2, "Upgrading the Operator using the CLI" . Upgrading the AMQ Broker Operator using OperatorHub requires cluster administrator privileges for your OpenShift cluster. 6.3.2. Before you begin This section describes some important considerations before you use OperatorHub to upgrade an instance of the AMQ Broker Operator. The Operator Lifecycle Manager automatically updates the CRDs in your OpenShift cluster when you install the latest Operator version from OperatorHub. You do not need to remove existing CRDs. If you remove existing CRDs, all CRs and broker instances are also removed. When you update your cluster with the CRDs for the latest Operator version, this update affects all projects in the cluster.
Any broker Pods deployed from versions of the Operator might become unable to update their status in the OpenShift Container Platform web console. When you click the Logs tab of a running broker Pod, you see messages indicating that 'UpdatePodStatus' has failed. However, the broker Pods and Operator in that project continue to work as expected. To fix this issue for an affected project, you must also upgrade that project to use the latest version of the Operator. The procedure to follow depends on the Operator version that you are upgrading from. Ensure that you follow the upgrade procedure that is for your current version. 6.3.3. Upgrading the Operator from pre-7.10.0 to 7.10.1 or later You can use OperatorHub to upgrade an instance of the Operator from pre-7.10.0 to 7.10.1 or later. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. Uninstall the existing AMQ Broker Operator from your project. In the left navigation menu, click Operators Installed Operators . From the Project drop-down menu at the top of the page, select the project in which you want to uninstall the Operator. Locate the Red Hat Integration - AMQ Broker instance that you want to uninstall. For your Operator instance, click the More Options icon (three vertical dots) on the right-hand side. Select Uninstall Operator . On the confirmation dialog box, click Uninstall . Use OperatorHub to install the latest version of the Operator for AMQ Broker 7.10. For more information, see Section 3.3.2, "Deploying the Operator from OperatorHub" . If automatic updates are enabled in the CR of your deployment, the Operator's reconciliation process upgrades each broker pod when the new Operator starts. If automatic updates are not enabled, you can enable them by setting the following attributes in your CR: spec: ... upgrades: enabled: true minor: true For more information on enabling automatic updates, see, Section 6.4, "Upgrading the broker container image by specifying an AMQ Broker version" . Note If the reconciliation process does not start, you can start the process by scaling the deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . 6.3.4. Upgrading the Operator from 7.10.0 to 7.10.x Use this procedure to upgrade from AMQ Broker Operator 7.10.0. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. Uninstall the existing AMQ Broker Operator from your project. In the left navigation menu, click Operators Installed Operators . From the Project drop-down menu at the top of the page, select the project in which you want to uninstall the Operator. Locate the Red Hat Integration - AMQ Broker instance that you want to uninstall. For your Operator instance, click the More Options icon (three vertical dots) on the right-hand side. Select Uninstall Operator . On the confirmation dialog box, click Uninstall . When you upgrade a 7.10.0 Operator, the new Operator deletes the StatefulSet to remove custom and Operator metering labels, which were incorrectly added to the StatefulSet selector by the Operator in 7.10.0. When the Operator deletes the StatefulSet, it also deletes the existing broker pods, which causes a temporary broker outage. If you want to avoid the outage, complete the following steps to delete the StatefulSet and orphan the broker pods so that they continue to run. 
Log in to OpenShift Container Platform CLI as an administrator for the project that contains your existing Operator deployment: USD oc login -u <user> Switch to the OpenShift project in which you want to upgrade your Operator version. USD oc project <project-name> Delete the StatefulSet with the --cascade=orphan option to orphan the broker Pods. The orphaned broker Pods continue to run after the StatefulSet is deleted. USD oc delete statefulset <statefulset-name> --cascade=orphan Check if your main broker CR has labels called application or ActiveMQArtemis configured in the deploymentPlan.labels attribute. In 7.10.0, it was possible to configure these custom labels in the CR. These labels are reserved for the Operator to assign labels to Pods and cannot be added as custom labels after 7.10.0. If these custom labels were configured in the main broker CR in 7.10.0, the Operator-assigned labels on the Pods were overwritten by the custom labels. If the CR has either of these labels, complete the following steps to restore the correct labels on the Pods and remove the labels from the CR. In the OpenShift command-line interface (CLI), run the following command to restore the correct Pod labels. In the following example, 'ex-aao' is the name of the StatefulSet deployed. USD for pod in USD(oc get pods | grep -o '^ex-aao[^ ]*') do; oc label --overwrite pods USDpod ActiveMQArtemis=ex-aao application=ex-aao-app; done Delete the application and ActiveMQArtemis labels from the deploymentPlan.labels attribute in the CR. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. In the deploymentPlan.labels element in the CR, delete any custom labels called application or ActiveMQArtemis . Save the CR file. Deploy the CR instance. Switch to the project for the broker deployment. Apply the CR. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click the instance for your broker deployment. Click the YAML tab. Within the console, a YAML editor opens, enabling you to configure a CR instance. In the deploymentPlan.labels element in the CR, delete any custom labels called application or ActiveMQArtemis . Click Save . Use OperatorHub to install the latest version of the Operator for AMQ Broker 7.10. For more information, see Section 3.3.2, "Deploying the Operator from OperatorHub" . The new Operator can recognize and manage your broker deployments. If automatic updates are enabled in the CR of your deployment, the Operator's reconciliation process upgrades each broker pod when the new Operator starts. If automatic updates are not enabled, you can enable them by setting the following attributes in your CR: spec: ... upgrades: enabled: true minor: true For more information on enabling automatic updates, see, Section 6.4, "Upgrading the broker container image by specifying an AMQ Broker version" . Note If the reconciliation process does not start, you can start the process by scaling the deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . 
Add attributes to the CR for the new features that are available in the upgraded broker, as required. 6.3.5. Upgrading the Operator from 7.10.1 to 7.10.x Use this procedure to upgrade from AMQ Broker Operator 7.10.1. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. Check if your main broker CR has labels called application or ActiveMQArtemis configured in the deploymentPlan.labels attribute. These labels are reserved for the Operator to assign labels to Pods and cannot be used after 7.10.1. If these custom labels were configured in the main broker CR, the Operator-assigned labels on the Pods were overwritten by the custom labels. If these custom labels are not configured in the main broker CR, use OperatorHub to install the latest version of the Operator for AMQ Broker 7.10. For more information, see Section 3.3.2, "Deploying the Operator from OperatorHub" . If either of these custom labels are configured in the main broker CR, complete the following steps to uninstall the existing Operator, restore the correct Pod labels and remove the labels from the CR, before you install the new Operator. Note By uninstalling the Operator, you can remove the custom labels without the Operator deleting the StatefulSet, which also deletes the existing broker pods and causes a temporary broker outage. Uninstall the existing AMQ Broker Operator from your project. In the left navigation menu, click Operators Installed Operators . From the Project drop-down menu at the top of the page, select the project from which you want to uninstall the Operator. Locate the Red Hat Integration - AMQ Broker instance that you want to uninstall. For your Operator instance, click the More Options icon (three vertical dots) on the right-hand side. Select Uninstall Operator . On the confirmation dialog box, click Uninstall . In the OpenShift command-line interface (CLI), run the following command to restore the correct Pod labels. In the following example, 'ex-aao' is the name of the StatefulSet deployed. USD for pod in USD(oc get pods | grep -o '^ex-aao[^ ]*') do; oc label --overwrite pods USDpod ActiveMQArtemis=ex-aao application=ex-aao-app; done Delete the application and ActiveMQArtemis labels from the deploymentPlan.labels attribute in the CR. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. In the deploymentPlan.labels attribute in the CR, delete any custom labels called application or ActiveMQArtemis . Save the CR file. Deploy the CR instance. Switch to the project for the broker deployment. Apply the CR. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click the instance for your broker deployment. Click the YAML tab. Within the console, a YAML editor opens, enabling you to configure a CR instance. In the deploymentPlan.labels attribute in the CR, delete any custom labels called application or ActiveMQArtemis . Click Save . Use OperatorHub to install the latest version of the Operator for AMQ Broker 7.10. 
For more information, see Section 3.3.2, "Deploying the Operator from OperatorHub" . The new Operator can recognize and manage your broker deployments. If automatic updates are enabled in the CR of your deployment, the Operator's reconciliation process upgrades each broker pod when the new Operator starts. If automatic updates are not enabled, you can enable them by setting the following attributes in your CR: spec: ... upgrades: enabled: true minor: true For more information on enabling automatic updates, see, Section 6.4, "Upgrading the broker container image by specifying an AMQ Broker version" . Note If the reconciliation process does not start, you can start the process by scaling the deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . Add attributes to the CR for the new features that are available in the upgraded broker, as required. 6.4. Upgrading the broker container image by specifying an AMQ Broker version The following procedure shows how to upgrade the broker container image for an Operator-based broker deployment by specifying an AMQ Broker version. You might do this, for example, if you upgrade the Operator to AMQ Broker 7.10.0 but the spec.upgrades.enabled property in your CR is already set to true and the spec.version property specifies 7.9.0 . To upgrade the broker container image, you need to manually specify a new AMQ Broker version (for example, 7.10.0 ). When you specify a new version of AMQ Broker, the Operator automatically chooses the broker container image that corresponds to this version. Prerequisites As described in Section 2.4, "How the Operator chooses container images" , if you deploy a CR and do not explicitly specify a broker container image, the Operator automatically chooses the appropriate container image to use. To use the upgrade process described in this section, you must use this default behavior. If you override the default behavior by directly specifying a broker container image in your CR, the Operator cannot automatically upgrade the broker container image to correspond to an AMQ Broker version as described below. Procedure Edit the main broker CR instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to edit and deploy CRs in the project for the broker deployment. In a text editor, open the CR file that you used for your broker deployment. For example, this might be the broker_activemqartemis_cr.yaml file that was included in the deploy/crs directory of the Operator installation archive that you previously downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to edit and deploy CRs in the project for the broker deployment. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Locate the CR instance that corresponds to your project namespace. For your CR instance, click the More Options icon (three vertical dots) on the right-hand side. Select Edit ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to edit the CR instance. To specify a version of AMQ Broker to which to upgrade the broker container image, set a value for the spec.version property of the CR. For example: spec: version: 7.10.0 ... In the spec section of the CR, locate the upgrades section. If this section is not already included in the CR, add it. spec: version: 7.10.0 ... 
upgrades: Ensure that the upgrades section includes the enabled and minor properties. spec: version: 7.10.0 ... upgrades: enabled: minor: To enable an upgrade of the broker container image based on a specified version of AMQ Broker, set the value of the enabled property to true . spec: version: 7.10.0 ... upgrades: enabled: true minor: To define the upgrade behavior of the broker, set a value for the minor property. To allow upgrades between minor AMQ Broker versions, set the value of minor to true . spec: version: 7.10.0 ... upgrades: enabled: true minor: true For example, suppose that the current broker container image corresponds to 7.9.0 , and a new image, corresponding to the 7.10.0 version specified for spec.version , is available. In this case, the Operator determines that there is an available upgrade between the 7.9.0 and 7.10.0 minor versions. Based on the preceding settings, which allow upgrades between minor versions, the Operator upgrades the broker container image. By contrast, suppose that the current broker container image corresponds to 7.10.0 , and you specify a new value of 7.10.1 for spec.version . If an image corresponding to 7.10.1 exists, the Operator determines that there is an available upgrade between 7.10.0 and 7.10.1 micro versions. Based on the preceding settings, which allow upgrades only between minor versions, the Operator does not upgrade the broker container image. To allow upgrades between micro AMQ Broker versions, set the value of minor to false . spec: version: 7.10.0 ... upgrades: enabled: true minor: false For example, suppose that the current broker container image corresponds to 7.9.0 , and a new image, corresponding to the 7.10.0 version specified for spec.version , is available. In this case, the Operator determines that there is an available upgrade between the 7.9.0 and 7.10.0 minor versions. Based on the preceding settings, which do not allow upgrades between minor versions (that is, only between micro versions), the Operator does not upgrade the broker container image. By contrast, suppose that the current broker container image corresponds to 7.10.0 , and you specify a new value of 7.10.1 for spec.version . If an image corresponding to 7.10.1 exists, the Operator determines that there is an available upgrade between 7.10.0 and 7.10.1 micro versions. Based on the preceding settings, which allow upgrades between micro versions, the Operator upgrades the broker container image. Apply the changes to the CR. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Apply the CR. Using the OpenShift web console: When you have finished editing the CR, click Save . When you apply the CR change, the Operator first validates that an upgrade to the AMQ Broker version specified for spec.version is available for your existing deployment. If you have specified an invalid version of AMQ Broker to which to upgrade (for example, a version that is not yet available), the Operator logs a warning message, and takes no further action. However, if an upgrade to the specified version is available, and the values specified for upgrades.enabled and upgrades.minor allow the upgrade, then the Operator upgrades each broker in the deployment to use the broker container image that corresponds to the new AMQ Broker version. The broker container image that the Operator uses is defined in an environment variable in the operator.yaml configuration file of the Operator deployment. 
The environment variable name includes an identifier for the AMQ Broker version. For example, the environment variable RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_7100 corresponds to AMQ Broker 7.10.7. When the Operator has applied the CR change, it restarts each broker Pod in your deployment so that each Pod uses the specified image version. If you have multiple brokers in your deployment, only one broker Pod shuts down and restarts at a time. Additional resources To learn how the Operator uses environment variables to choose a broker container image, see Section 2.4, "How the Operator chooses container images" .
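To see which broker container images the running Operator can select from, you can list the RELATED_IMAGE_* environment variables directly on the Operator Deployment rather than opening operator.yaml. The following is a minimal sketch only; the Deployment name amq-broker-operator is an assumption based on the CLI installation steps earlier in this chapter, so substitute the name used in your project if it differs.
# List the image-selection environment variables on the Operator Deployment.
# The Deployment name is an assumption; adjust it to match your project.
oc get deployment amq-broker-operator \
  -o jsonpath='{range .spec.template.spec.containers[0].env[*]}{.name}{"\t"}{.value}{"\n"}{end}' \
  | grep RELATED_IMAGE
The entry whose version identifier matches the spec.version value in your CR indicates the broker container image that the Operator selects during an upgrade.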
[ "spec: brokerProperties: - \"addressSettings.#.redeliveryMultiplier=2.1\" - \"addressSettings.#.redeliveryCollisionAvoidanceFactor=1.2\"", "mkdir ~/broker/operator mv amq-broker-operator-7.10.7-ocp-install-examples.zip ~/broker/operator", "cd ~/broker/operator unzip amq-broker-operator-operator-7.10.7-ocp-install-examples.zip", "oc login -u <user>", "oc project <project-name>", "spec: selector matchLabels name: amq-broker-operator", "oc apply -f deploy/crds/broker_activemqartemis_crd.yaml", "oc apply -f deploy/crds/broker_activemqartemisaddress_crd.yaml", "oc apply -f deploy/crds/broker_activemqartemisscaledown_crd.yaml", "oc apply -f deploy/crds/broker_activemqartemissecurity_crd.yaml", "oc delete -f deploy/operator.yaml", "oc delete statefulset <statefulset-name> --cascade=orphan", "oc delete -f deploy/operator.yaml", "for pod in USD(oc get pods | grep -o '^ex-aao[^ ]*') do; oc label --overwrite pods USDpod ActiveMQArtemis=ex-aao application=ex-aao-app; done", "login -u <user> -p <password> --server= <host:port>", "oc project <project_name>", "oc apply -f <path/to/broker_custom_resource_instance> .yaml", "oc create -f deploy/operator.yaml", "oc apply -f deploy/operator.yaml", "spec: upgrades: enabled: true minor: true", "spec: upgrades: enabled: true minor: true", "oc login -u <user>", "oc project <project-name>", "oc delete statefulset <statefulset-name> --cascade=orphan", "for pod in USD(oc get pods | grep -o '^ex-aao[^ ]*') do; oc label --overwrite pods USDpod ActiveMQArtemis=ex-aao application=ex-aao-app; done", "login -u <user> -p <password> --server= <host:port>", "oc project <project_name>", "oc apply -f <path/to/broker_custom_resource_instance> .yaml", "spec: upgrades: enabled: true minor: true", "for pod in USD(oc get pods | grep -o '^ex-aao[^ ]*') do; oc label --overwrite pods USDpod ActiveMQArtemis=ex-aao application=ex-aao-app; done", "login -u <user> -p <password> --server= <host:port>", "oc project <project_name>", "oc apply -f <path/to/broker_custom_resource_instance> .yaml", "spec: upgrades: enabled: true minor: true", "oc login -u <user> -p <password> --server= <host:port>", "spec: version: 7.10.0", "spec: version: 7.10.0 upgrades:", "spec: version: 7.10.0 upgrades: enabled: minor:", "spec: version: 7.10.0 upgrades: enabled: true minor:", "spec: version: 7.10.0 upgrades: enabled: true minor: true", "spec: version: 7.10.0 upgrades: enabled: true minor: false", "oc project <project_name>", "oc apply -f <path/to/broker_custom_resource_instance> .yaml" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/deploying_amq_broker_on_openshift/assembly_br-upgrading-operator-based-broker-deployments_broker-ocp
Chapter 3. Changes to packages, functionality, and support
Chapter 3. Changes to packages, functionality, and support Read this chapter for information about changes to the functionality or to packages provided in Red Hat Enterprise Linux 7, and changes to the support of said packages. 3.1. New Packages This section describes notable packages now available in Red Hat Enterprise Linux 7. 3.1.1. Preupgrade Assistant The Preupgrade Assistant ( preupg ) checks for potential problems you might encounter with an upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 before making any changes to your system. This helps you assess your chances of successfully upgrading to Red Hat Enterprise Linux 7 before the actual upgrade process begins. The Preupgrade Assistant assesses the system for possible in-place upgrade limitations, such as package removals, incompatible obsoletes, name changes, deficiencies in some configuration file compatibilities, and so on. It then provides the following: System analysis report with proposed solutions for any detected migration issues. Data that could be used for "cloning" the system, if the in-place upgrade is not suitable. Post-upgrade scripts to finish more complex issues after the in-place upgrade. Your system remains unchanged except for the information and logs stored by the Preupgrade Assistant . For detailed instructions on how to obtain and use the Preupgrade Assistant , see Assessing upgrade suitability . 3.1.2. Red Hat Upgrade Tool The new Red Hat Upgrade Tool is used after the Preupgrade Assistant , and handles the three phases of the upgrade process: Red Hat Upgrade Tool fetches packages and an upgrade image from a disk or server, prepares the system for the upgrade, and reboots the system. The rebooted system detects that upgrade packages are available and uses systemd and yum to upgrade packages on the system. Red Hat Upgrade Tool cleans up after the upgrade and reboots the system into the upgraded operating system. Both network and disk based upgrades are supported. For detailed instructions on how to upgrade your system, see Chapter 1, How to Upgrade . 3.1.3. Chrony Chrony is a new NTP client provided in the chrony package. It replaces the reference implementation ( ntp ) as the default NTP implementation in Red Hat Enterprise Linux 7. However, it does not support all features available in ntp , so ntp is still provided for compatibility reasons. If you require ntp , you must explicitly remove chrony and install ntp instead. Chrony 's timekeeping algorithms have several advantages over the ntp implementation. Faster, more accurate synchronization. Larger range for frequency correction. Better response to rapid changes in clock frequency. No clock stepping after initial synchronization. Works well with an intermittent network connection. For more information about chrony , see the System Administrator's Guide . 3.1.4. HAProxy HAProxy is a TCP/HTTP reverse proxy that is well-suited to high availability environments. It requires few resources, and its event-driven architecture allows it to easily handle thousands of simultaneous connections on hundreds of instances without risking the stability of the system. For more information about HAProxy , see the man page, or consult the documentation installed along with the haproxy package in the /usr/share/doc/haproxy directory. 3.1.5. Kernel-tools The kernel-tools package includes a number of tools for the Linux kernel. Some tools in this package replace tools previously available in other packages. 
See Section 3.3, "Deprecated Packages" and Section 3.2, "Package Replacements" for details. 3.1.6. NFQUEUE (libnetfilter_queue) Red Hat Enterprise Linux 7.1 provides the libnetfilter_queue package. This library enables the NFQUEUE iptables target, which specifies that a listening user-space application will retrieve a packet from a specified queue and determine how that packet will be handled. 3.1.7. SCAP Security Guide The scap-security-guide package provides security guidance, baselines, and associated validation mechanisms for the Security Content Automation Protocol (SCAP). Previously, this package was only available through the EPEL repository (Extra Packages for Enterprise Linux). As of Red Hat Enterprise Linux 7.1, scap-security-guide is available in the Red Hat Enterprise Linux 7 Server (RPMS) repository. 3.1.8. Red Hat Access GUI Red Hat Access GUI is a desktop application, which lets you find help, answers, and utilize diagnostic services using Red Hat Knowledgebase, resources, and functionality. If you have an active account on the Red Hat Customer Portal , you can access additional information and tips of the Knowledgebase easily browsable by keywords. Red Hat Access GUI is already installed if you select to have the GNOME Desktop installed. For more information on the benefits, installation, and usage of this tool, see Red Hat Access GUI . 3.2. Package Replacements This section lists packages that have been removed from Red Hat Enterprise Linux between version 6 and version 7 alongside functionally equivalent replacement packages or alternative packages available in Red Hat Enterprise Linux 7. Table 3.1. Replaced packages Removed package Replacement/Alternative Notes vconfig iproute (ip tool) Not fully compatible. module-init-tools kmod openoffice.org libreoffice man man-db ext2 and ext3 filesystem driver ext4 filesystem driver openais corosync Functionality wrapped by the Red Hat Enterprise Linux HA stack. jwhois whois Output format differs. libjpeg libjpeg-turbo gpxe ipxe Fork of gpxe . cpuspeed kernel, kernel-tools (cpupower, cpupower.service) Now configured in /etc/sysconfig/cpupower . No longer includes user-space scaling daemon; use kernel governors if necessary. nc nmap-ncat procps procps-ng openswan libreswan arptables_jf arptables gcj OpenJDK Do not compile Java apps to native code with gcj . 32-bit x86 as installation architecture AMD64 and Intel 64 Applications will still run with compatibility libraries. Test your applications on 64-bit Red Hat Enterprise Linux 6. If 32-bit x86 boot support is required, continue to use Red Hat Enterprise Linux 6. Power 6 PPC support Continue to use Red Hat Enterprise Linux 5 or Red Hat Enterprise Linux 6 Matahari CIM-based management ecryptfs Use existing LUKS/dm-crypt block-based encryption Migration is not available for encrypted file systems; encrypted data must be recreated. evolution-exchange evolution-mapi/evolution-ews TurboGears2 web application stack openmotif22 motif Rebuild applications against the current Motif version. webalizer web anayltics tool Other web analytics tools are superior. compiz window manager gnome-shell Eclipse developer toolset Eclipse is now offered in the Developer Toolset offering. Qpid and QMF Qpid and QMF are available in the MRG offering. amtu Common Criteria certifications no longer require this tool. pidgin frontends empathy perl-suidperl perl This functionality has been removed in upstream perl. pam_passwdqc, pam_cracklib libpwquality, pam_pwquality Not fully compatible. 
HAL library and daemon udev ConsoleKit library and daemon systemd Not fully compatible. system-config-network nm-connection-editor, nmcli thunderbird evolution system-config-firewall firewalld busybox normal utilities KVM/virt packages (in ComputeNode) KVM/virt equipped variant such as a Server variant abyssinica-fonts sil-abyssinica-fonts axis java-1.7.0-openjdk ccs pcs Not fully compatible. cjkuni-fonts-common cjkuni-uming-fonts classpath-jaf java-1.7.0-openjdk classpath-mail javamail Not fully compatible. cman corosync control-center-extra control-center db4-cxx libdb4-cxx db4-devel libdb4-devel db4-utils libdb4-utils desktop-effects control-center DeviceKit-power upower Not fully compatible. dracut-kernel dracut eggdbus glib2 Not fully compatible. fcoe-target-utils targetcli See Section 2.6.3, "Target Management with targetcli" for details. febootstrap supermin gcc-java java-1.7.0-openjdk-devel GConf2-gtk GConf2 gdm-plugin-fingerprint gdm gdm-plugin-smartcard gdm gdm-user-switch-applet gnome-shell Not fully compatible. geronimo-specs geronimo-parent-poms geronimo-specs-compat geronimo-jms, geronimo-jta Not fully compatible. gimp-help-browser gimp Not fully compatible. gnome-applets gnome-classic-session Not fully compatible. gnome-keyring-devel gnome-keyring gnome-mag gnome-shell Not fully compatible. gnome-python2-applet pygtk2 Not fully compatible. gnome-speech speech-dispatcher Not fully compatible. gpxe-roms-qemu ipxe-roms-qemu hal systemd Not fully compatible. hal-devel systemd-devel Not fully compatible. ibus-gtk ibus-gtk2 ibus-table-cangjie ibus-table-chinese-cangjie ibus-table-erbi ibus-table-chinese-erbi ibus-table-wubi ibus-table-chinese-wubi-haifeng jakarta-commons-net apache-commons-net java-1.5.0-gcj java-1.7.0-openjdk, java-1.7.0-openjdk-headless Not fully compatible. java-1.5.0-gcj-devel java-1.7.0-openjdk-devel Not fully compatible. java-1.5.0-gcj-javadoc java-1.7.0-openjdk-javadoc Not fully compatible. junit4 junit jwhois whois kabi-whitelists kernel-abi-whitelists kdeaccessibility-libs kdeaccessibility kdebase-devel kde-baseapps-devel kdebase-workspace-wallpapers kde-wallpapers kdelibs-experimental kdelibs kdesdk-libs kate-libs, kdesdk-kmtrace-libs, kdesdk-kompare Not fully compatible. kdesdk-utils kdesdk-poxml krb5-auth-dialog gnome-online-accounts Not fully compatible. lldpad-libs lldpad lslk util-linux Not fully compatible. luci pcs See Section 2.8, "Clustering and High Availability" for details. man-pages-uk man-pages mingetty util-linux Not fully compatible. modcluster pcs Not fully compatible. mod_perl mod_fcgid Not compatible with httpd 2.4. m17n-contrib-* m17n-contrib m17n-db-* m17n-db, m17n-db-extras NetworkManager-gnome nm-connection-editor, network-manager, applet nss_db glibc Not fully compatible. openais corosync openaislib corosynclib openaislib-devel corosynclib-devel PackageKit-gtk-module PackageKit-gtk3-module Not fully compatible. polkit-desktop-policy polkit pulseaudio-libs-zeroconf pulseaudio-libs Not fully compatible. qt-sqlite qt rdesktop xfreerdp Red_Hat_Enterprise_Linux-Release_Notes-6-* Red_Hat_Enterprise_Linux-Release_Notes-7-* redhat-lsb-compat redhat-lsb-core rgmanager pacemaker See Section 2.8, "Clustering and High Availability" for details. rhythmbox-upnp rhythmbox ricci pcs See Section 2.8, "Clustering and High Availability" for details. samba4* samba* See Section 2.7.6.3, "Samba" for details. sbm-cim-client sbm-cim-client2 Not fully compatible. 
scsi-target-utils targetcli See Section 2.6.3, "Target Management with targetcli" for details. seekwatcher iowatcher spice-client virt-viewer Not fully compatible. system-config-lvm gnome-disk-utility Not fully compatible. texlive-* texlive tex-cm-lgc texlive-cm-lgc tex-kerkis texlive-kerkis texlive-texmf-dvips texlive-dvips texlive-texmf-latex texlive-latex tomcat6 tomcat tomcat6-el-2.1-api tomcat-el-2.2-api tomcat6-jsp-2.1-api tomcat-jsp-2.2-api tomcat6-lib tomcat-lib totem-upnp totem udisks udisks2 Not fully compatible. un-core-batang-fonts nhn-nanum-myeongjo-fonts un-core-dinaru-fonts, un-core-graphic-fonts nhn-nanum-gothic-fonts Not fully compatible. un-core-dotum-fonts nhn-nanum-gothic-fonts un-core-fonts-common nhn-nanum-fonts-common Not fully compatible. un-core-gungseo-fonts nhn-nanum-brush-fonts Not fully compatible. un-core-pilgi-fonts nhn-nanum-pen-fonts Not fully compatible. unique unique3, glib2 Not fully compatible. unique-devel unique3-devel Not fully compatible. unix2dos dos2unix vgabios seavgabios-bin w3m text-www-browser Not fully compatible. xmlrpc3-* xmlrpc-* xorg-x11-drv-apm xorg-x11-drv-fbdev, xorg-x11-drv-vesa xorg-x11-drv-ast, xorg-x11-drv-cirrus, xorg-x11-drv-mga xorg-x11-drv-modesetting xorg-x11-drv-ati-firmware linux-firmware xorg-x11-drv-elographics, xorg-x11-drv-glint, xorg-x11-drv-i128, xorg-x11-drv-i740, xorg-x11-drv-mach64, xorg-x11-drv-rendition, xorg-x11-drv-r128, xorg-x11-drv-savage, xorg-x11-drv-siliconmotion, xorg-x11-drv-sis, xorg-x11-drv-sisusb, xorg-x11-drv-s3virge, xorg-x11-drv-tdfx, xorg-x11-drv-trident, xorg-x11-drv-voodoo, xorg-x11-drv-xgi xorg-x11-drv-fbdev, xorg-x11-drv-vesa xorg-x11-drv-nv xorg-x11-drv-nouveau xorg-x11-twm metacity Not fully compatible. xorg-x11-xdm gdm Not fully compatible. yum-plugin-downloadonly yum 3.3. Deprecated Packages The packages listed in this section are considered deprecated as of Red Hat Enterprise Linux 7. These packages still work, and remain supported, but Red Hat no longer recommends their use. Table 3.2. Package deprecations Functionality/Package Alternative Migration Notes ext2 file system support ext3, ext4 ext4 can be used for ext2 and ext3 file systems. sblim-sfcb tog-pegasus Legacy RHN Hosted registration subscription-manager and Subscription Asset Manager acpid systemd evolution-mapi evolution-ews Please migrate from Microsoft Exchange Server 2003 machines gtkhtml3 webkitgtk3 sendmail postfix edac-utils and mcelog rasdaemon libcgroup systemd cgutils will continue to exist in Red Hat Enterprise Linux 7.0 but systemd is evolving capabilities to enable customers to migrate in later releases lvm1 lvm2 lvm2mirror and cmirror lvm2 raid1 3.4. Removed Packages The following packages have been removed from Red Hat Enterprise Linux between version 6 and version 7 and are no longer supported. Some of these packages may have functionally equivalent replacements available; see Section 3.2, "Package Replacements" for details. 
amtu ant-antlr ant-apache-bcel ant-apache-bsf ant-apache-log4j ant-apache-oro ant-apache-regexp ant-apache-resolver ant-commons-logging ant-commons-net ant-javamail ant-jdepend ant-jsch ant-junit ant-nodeps ant-swing ant-trax apache-jasper apache-tomcat-apis apr-util-ldap arts arts-devel aspell atmel-firmware at-spi at-spi-python audiofile audit-viewer avahi-tools avahi-ui avalon-framework avalon-logkit batik brasero brasero-libs brasero-nautilus bsf busybox b43-fwcutter b43-openfwwf cas cdparanoia cdrdao cjet cloog-ppl cluster-cim cluster-glue cluster-glue-libs cluster-glue-libs-devel clusterlib clusterlib-devel cluster-snmp cman compat-db42 compat-db43 compat-libstdc++-296 compat-libtermcap compat-openmpi compat-openmpi-psm compat-opensm-libs compiz compiz-gnome coreutils-libs cracklib-python cronie-noanacron ctan-cm-lgc-fonts-common ctan-cm-lgc-roman-fonts ctan-cm-lgc-sans-fonts ctan-cm-lgc-typewriter-fonts ctan-kerkis-fonts-common ctan-kerkis-sans-fonts ctan-kerkis-serif-fonts ctapi-common cvs-inetd c2050 c2070 dash dbus-c+ dbus-qt devhelp dmz-cursor-themes dtach dvd+rw-tools eclipse-birt eclipse-callgraph eclipse-cdt eclipse-dtp eclipse-emf eclipse-gef eclipse-changelog eclipse-jdt eclipse-linuxprofilingframework eclipse-mylyn eclipse-mylyn-cdt eclipse-mylyn-java eclipse-mylyn-pde eclipse-mylyn-trac eclipse-mylyn-webtasks eclipse-mylyn-wikitext eclipse-nls eclipse-nls-ar eclipse-nls-bg eclipse-nls-ca eclipse-nls-cs eclipse-nls-da eclipse-nls-de eclipse-nls-el eclipse-nls-es eclipse-nls-et eclipse-nls-fa eclipse-nls-fi eclipse-nls-fr eclipse-nls-he eclipse-nls-hi eclipse-nls-hu eclipse-nls-id eclipse-nls-it eclipse-nls-ja eclipse-nls-ko eclipse-nls-ku eclipse-nls-mn eclipse-nls-nl eclipse-nls-no eclipse-nls-pl eclipse-nls-pt eclipse-nls-pt_BR eclipse-nls-ro eclipse-nls-ru eclipse-nls-sk eclipse-nls-sl eclipse-nls-sq eclipse-nls-sr eclipse-nls-sv eclipse-nls-tr eclipse-nls-uk eclipse-nls-zh eclipse-nls-zh_TW eclipse-oprofile eclipse-pde eclipse-platform eclipse-rcp eclipse-rpm-editor eclipse-rse eclipse-subclipse eclipse-subclipse-graph eclipse-svnkit eclipse-swt eclipse-valgrind ecryptfs-utils evolution-data-server-doc fakechroot fakechroot-libs fence-virt fence-virtd-checkpoint file-devel firstaidkit firstaidkit-engine firstaidkit-gui foghorn fop gamin-devel gamin-python gconfmm26 ggz-base-libs glade3 gnome-disk-utility-libs gnome-disk-utility-ui-libs gnome-doc-utils gnome-doc-utils-stylesheets gnome-games gnome-media gnome-media-libs gnome-pilot gnome-pilot-conduits gnome-power-manager gnome-python2-bugbuddy gnome-python2-extras gnome-python2-gtkhtml2 gnome-python2-libegg gnome-python2-libwnck gnome-python2-rsvg gnome-themes gnome-user-share gnome-vfs2-devel gnome-vfs2-smb graphviz-perl groff gsl-static gstreamer-python gthumb gtk+extra gtkhtml2 gtksourceview2 gtk2-engines guile gvfs-afc gvfs-archive hal-info hal-libs hal-storage-addon htdig hypervkvpd ibus-table-additional icedax icu4j-eclipse ipa-pki-ca-theme ipa-pki-common-theme ipw2100-firmware ipw2200-firmware jakarta-commons-discovery jakarta-commons-el jasper java_cup jdepend jetty-eclipse jsch jzlib kabi-yum-plugins kcoloredit kcoloredit-doc kdeadmin kdeartwork-screensavers kdebase-workspace-akonadi kdebase-workspace-python-applet kdegames kdegraphics kde-i18n-Arabic kde-i18n-Bengali kde-i18n-Brazil kde-i18n-British kde-i18n-Bulgarian kde-i18n-Catalan kde-i18n-Czech kde-i18n-Danish kde-i18n-Dutch kde-i18n-Estonian kde-i18n-Finnish kde-i18n-French kde-i18n-German kde-i18n-Greek kde-i18n-Hebrew kde-i18n-Hindi kde-i18n-Hungarian 
kde-i18n-Chinese kde-i18n-Chinese-Big5 kde-i18n-Icelandic kde-i18n-Italian kde-i18n-Japanese kde-i18n-Korean kde-i18n-Lithuanian kde-i18n-Norwegian kde-i18n-Norwegian-Nynorsk kde-i18n-Polish kde-i18n-Portuguese kde-i18n-Punjabi kde-i18n-Romanian kde-i18n-Russian kde-i18n-Serbian kde-i18n-Slovak kde-i18n-Slovenian kde-i18n-Spanish kde-i18n-Swedish kde-i18n-Tamil kde-i18n-Turkish kde-i18n-Ukrainian kdelibs-apidocs kdelibs3 kdelibs3-devel kde-l10n-Bengali-India kde-l10n-Frisian kde-l10n-Gujarati kde-l10n-Chhattisgarhi kde-l10n-Kannada kde-l10n-Kashubian kde-l10n-Kurdish kde-l10n-Macedonian kde-l10n-Maithili kde-l10n-Malayalam kde-l10n-Marathi kdemultimedia kdemultimedia-devel kdemultimedia-libs kdenetwork kdesdk kdesdk-libs kdeutils kdewebdev kdewebdev-libs kernel-debug kernel-debug-devel kernel-doc kiconedit kipi-plugins kipi-plugins-libs kmid kmid-common konq-plugins-doc krb5-appl kross-python ksig ksig-doc k3b k3b-common k3b-libs libao-devel libart_lgpl-devel libbonobo-devel libbonoboui-devel libburn libcroco-devel libdc1394 libdiscid libesmtp-devel libexif-devel libgail-gnome libgcj libgcj-devel libgcj-src libglademm24 libglade2-devel libgnomecanvas-devel libgnome-devel libgnomeui-devel libgphoto2-devel libgpod libgsf-devel libgxim libIDL-devel libidn-devel libisofs libitm libldb-devel libmatchbox libmtp libmusicbrainz libmusicbrainz3 libnih liboil libopenraw-gnome libpanelappletmm libproxy-bin libproxy-python libreport-compat libreport-plugin-mailx libreport-plugin-reportuploader librtas (32-bit only) libselinux-ruby libservicelog (32-bit only) libsexy libtalloc-devel libtdb-devel libtevent-devel libtidy libvpd (32-bit only) libwnck libXdmcp-devel log4cpp lpg-java-compat lucene lucene-contrib lx lynx MAKEDEV matchbox-window-manager mcstrans mesa-dri1-drivers min12xxw mod_auth_mysql mod_auth_pgsql mod_authz_ldap mod_dnssd mrtg-libs mvapich-psm-static mx4j nspluginwrapper openct openhpi-subagent openssh-askpass ORBit2-devel osutil oxygen-cursor-themes PackageKit-yum-plugin paktype-fonts-common pam_passwdqc pbm2l2030 pbm2l7k pcmciautils pcsc-lite-openct perl-BSD-Resource perl-Cache-Memcached perl-Class-MethodMaker perl-Config-General perl-Crypt-PasswdMD5 perl-Frontier-RPC perl-Frontier-RPC-doc perl-Perlilog perl-String-CRC32 perl-suidperl perl-Text-Iconv perl-Time-HiRes perl-YAML-Syck pessulus pilot-link pinentry-gtk piranha pki-symkey plpa-libs plymouth-gdm-hooks plymouth-theme-rings plymouth-utils policycoreutils-newrole policycoreutils-sandbox ppl prelink printer-filters psutils ptouch-driver pulseaudio-module-gconf pycairo-devel pygobject2-codegen pygobject2-devel pygobject2-doc pygtksourceview pygtk2-codegen pygtk2-devel pygtk2-doc pychart PyOpenGL [1] python-beaker python-Coherence python-crypto python-decoratortools python-enchant python-formencode python-fpconst python-genshi python-gtkextra python-cheetah python-ipaddr python-iwlib python-libguestfs [2] python-louie python-mako python-markdown python-markupsafe python-matplotlib python-myghty python-paramiko python-paste python-paste-deploy python-paste-script python-peak-rules python-peak-util-addons python-peak-util-assembler python-peak-util-extremes python-peak-util-symbols python-prioritized-methods python-pygments python-pylons python-qpid python-qpid-qmf python-repoze-tm2 python-repoze-what python-repoze-what-plugins-sql python-repoze-what-pylons python-repoze-what-quickstart python-repoze-who python-repoze-who-friendlyform python-repoze-who-plugins-sa python-repoze-who-testutil python-routes python-saslwrapper python-sexy 
python-sqlalchemy python-tempita python-toscawidgets python-transaction python-turbojson python-tw-forms python-twisted python-twisted-conch python-twisted-core python-twisted-lore python-twisted-mail python-twisted-names python-twisted-news python-twisted-runner python-twisted-web python-twisted-words python-weberror python-webflash python-webhelpers python-webob python-webtest python-zope-filesystem python-zope-interface python-zope-sqlalchemy pywebkitgtk pyxf86config qpid-cpp-client qpid-cpp-client-ssl qpid-cpp-server qpid-cpp-server-ssl qpid-qmf qpid-tests qpid-tools qt-doc raptor rgmanager rome ruby-devel ruby-qpid ruby-qpid-qmf sabayon sabayon-apply sac samba-winbind-clients samba4 samba4-client samba4-common samba4-dc samba4-dc-libs samba4-devel samba4-pidl samba4-swat samba4-test samba4-winbind samba4-winbind-clients samba4-winbind-krb5-locator saslwrapper sat4j saxon sblim-cmpi-dhcp sblim-cmpi-dns sblim-cmpi-samba sblim-tools-libra scenery-backgrounds seabios selinux-policy-minimum selinux-policy-mls setools-console sgabios-bin sigar sinjdoc smp_utils SOAPpy sound-juicer strigi-devel subscription-manager-migration-data subversion-javahl svnkit system-config-firewall system-config-firewall-tui system-config-network-tui system-config-services system-config-services-docs system-gnome-theme system-icon-theme taskjuggler tbird terminus-fonts tidy tigervnc-server tix tkinter trilead-ssh2 tsclient tunctl TurboGears2 unicap vorbis-tools wacomexpresskeys wdaemon webalizer webkitgtk ws-commons-util wsdl4j xfig-plain xfsprogs-devel xfsprogs-qa-devel xguest xmldb-api xmldb-api-sdk xmlgraphics-commons xorg-x11-apps xorg-x11-drv-acecad xorg-x11-drv-aiptek xorg-x11-drv-fpit xorg-x11-drv-hyperpen xorg-x11-drv-keyboard xorg-x11-drv-mouse xorg-x11-drv-mutouch xorg-x11-drv-openchrome xorg-x11-drv-penmount xorg-x11-server-Xephyr xsane xz-lzma-compat zd1211-firmware 3.5. Removed Drivers The following drivers have been removed from Red Hat Enterprise Linux between version 6 and version 7 and are no longer supported. 3c574_cs.ko 3c589_cs.ko 3c59x.ko 8390.ko acenic.ko amd8111e.ko avma1_cs-ko [3] avm_cs.ko axnet_cs.ko b1pcmpcia.ko bluecard_cs-ko bt3c_cs.ko btuart_cs.ko can-dev.ko cassini.ko cdc-phonet.ko cm4000_cs.ko cm4040_cs.ko cxgb.ko de2104x.ko de4x5.ko dl2k.ko dmfe.ko dtl1_cs.ko e100.ko elsa_cs.ko ems_pci.ko ems_usb.ko fealnx.ko fmvj18x_cs.ko forcedeth.ko ipwireless.ko ixgb.ko kvaser_pci.ko myri10ge.ko natsemi.ko ne2k-pci.ko niu.ko nmclan_cs.ko ns83820.ko parport_cs.ko pata_pcmcia.ko pcnet_cs.ko pcnet32.ko pppol2tp.ko r6040.ko s2io.ko sc92031.ko sdricoh_cs.ko sedlbauer_cs.ko serial_cs.ko sis190.ko sis900.ko sja1000_platform.ko sja1000.ko smc91c92_cs.ko starfire.ko sundance.ko sungem_phy.ko sungem.ko sunhme.ko tehuti.ko teles_cs.ko tlan.ko tulip.ko typhoon.ko uli526x.ko vcan.ko via-rhine.ko via-velocity.ko vxge.ko winbond-840.ko xirc2ps_cs.ko xircom_cb.ko 3.6. Deprecated Drivers For information about deprecated drivers in Red Hat Enterprise Linux 7, see the most recent version of Release Notes on the Red Hat Customer Portal . [1] Removed in Red Hat Enterprise Linux 7.0, replaced in Red Hat Enterprise Linux 7.1. Added to Optional channel in Red Hat Enterprise Linux 7.3. For more information about Optinal channel, see this solution article . [2] Moved to the Optional repository for Red Hat Enterprise Linux 7.0, back in the base channel since Red Hat Enterprise Linux 7.1. [3] The PCMCIA is not supported in Red Hat Enterprise Linux 7. It has been superseded by new technologies, including USB.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/migration_planning_guide/chap-red_hat_enterprise_linux-migration_planning_guide-changes_to_packages_functionality_and_support
Chapter 20. Deprecated Functionality
Chapter 20. Deprecated Functionality This chapter provides an overview of functionality that has been deprecated, or in some cases removed, in all minor releases up to Red Hat Enterprise Linux 6.10. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. TLS compression support has been removed from nss To prevent security risks, such as the CRIME attack, support for TLS compression in the NSS library has been removed for all TLS versions. This change preserves API compatibility. Changes in public web CAs trust In addition to the regular trust removals and additions that occur in updated versions of Mozilla's CA list, Mozilla has decided to stop maintaining a part of the CA trust list that the recent versions of Mozilla software no longer require. All CAs that Mozilla had previously declared as trusted to issue code signing certificates have had that trust attribute removed. Because Red Hat provides Mozilla's CA trust list at the operating system level and it is used by many applications, some environments might potentially use software that depends on the code signing trust attribute to be set for CAs. To provide backwards compatibility for applications that require it, the ca-certificates package keeps the code signing trust attribute for several CAs, depending on the ca-legacy configuration. If the default ca-legacy configuration is active, and if a CA certificate continues to be trusted by Mozilla for issuing server authentication certificates, and that CA had been previously trusted by Mozilla for issuing code signing certificates, then the ca-certificates package configures that CA as still trusted for issuing code signing certificates. If the system administrator uses the ca-legacy disable command to disable the legacy compatibility configuration, then the unmodified Mozilla CA list will be used by the system, and none of the CA certificates provided by the ca-certificates package will be trusted for issuing code signing certificates. Both ipt and xt actions deprecated from iproute Due to various unresolved issues and design flaws, both ipt and xt actions have been dropped from iproute in Red Hat Enterprise Linux 6. 
Deprecated Drivers Deprecated device drivers 3w-9xxx 3w-sas 3w-xxxx aic7xxx i2o ips megaraid_mbox mptbase mptctl mptfc mptlan mptsas mptscsih mptspi sym53c8xx qla3xxx The following controllers from the megaraid_sas driver have been deprecated: Dell PERC5, PCI ID 0x15 SAS1078R, PCI ID 0x60 SAS1078DE, PCI ID 0x7C SAS1064R, PCI ID 0x411 VERDE_ZCR, PCI ID 0x413 SAS1078GEN2, PCI ID 0x78 The following controllers from the be2iscsi driver have been deprecated: BE_DEVICE_ID1, PCI ID 0x212 OC_DEVICE_ID1, PCI ID 0x702 OC_DEVICE_ID2, PCI ID 0x703 Note that other controllers from the mentioned drivers that are not listed here remain unchanged. Other Deprecated Components cluster , luci components The fence_sanlock agent and checkquorum.wdmd , introduced in Red Hat Enterprise Linux 6.4 as a Technology Preview and providing mechanisms to trigger the recovery of a node using a hardware watchdog device, are considered deprecated. openswan component The openswan packages have been deprecated, and libreswan packages have been introduced as a direct replacement for openswan to provide the VPN endpoint solution. openswan is replaced by libreswan during the system upgrade. seabios component Native KVM support for the S3 (suspend to RAM) and S4 (suspend to disk) power management states has been discontinued. This feature was previously available as a Technology Preview. The zerombr yes Kickstart command is deprecated In some earlier versions of Red Hat Enterprise Linux, the zerombr yes command was used to initialize any invalid partition tables during a Kickstart installation. This was inconsistent with the rest of the Kickstart commands due to requiring two words while all other commands require one. Starting with Red Hat Enterprise Linux 6.7, specifying only zerombr in your Kickstart file is sufficient, and the old two-word form is deprecated. Btrfs file system B-tree file system (Btrfs) is considered deprecated for Red Hat Enterprise Linux 6. Btrfs was previously provided as a Technology Preview, available on AMD64 and Intel 64 architectures. eCryptfs file system eCryptfs file system, which was previously available as a Technology Preview, is considered deprecated for Red Hat Enterprise Linux 6. mingw component Following the deprecation of Matahari packages in Red Hat Enterprise Linux 6.3, at which time the mingw packages were noted as deprecated, and the subsequent removal of Matahari packages from Red Hat Enterprise Linux 6.4, the mingw packages were removed from Red Hat Enterprise Linux 6.6 and later. The mingw packages are no longer shipped in Red Hat Enterprise Linux 6 minor releases, nor will they receive security-related updates. Consequently, users are advised to uninstall any earlier releases of the mingw packages from their Red Hat Enterprise Linux 6 systems. virtio-win component, BZ# 1001981 The VirtIO SCSI driver is no longer supported on Microsoft Windows Server 2003 platform. fence-agents component Prior to Red Hat Enterprise Linux 6.5 release, the Red Hat Enterprise Linux High Availability Add-On was considered fully supported on certain VMware ESXi/vCenter versions in combination with the fence_scsi fence agent. Due to limitations in these VMware platforms in the area of SCSI-3 persistent reservations, the fence_scsi fencing agent is no longer supported on any version of the Red Hat Enterprise Linux High Availability Add-On in VMware virtual machines, except when using iSCSI-based storage. 
See the Virtualization Support Matrix for High Availability for full details on supported combinations: https://access.redhat.com/site/articles/29440 . Users who use fence_scsi on an affected combination can contact Red Hat Global Support Services for assistance in evaluating alternative configurations or for additional information. systemtap component The systemtap-grapher package has been removed from Red Hat Enterprise Linux 6. For more information, see https://access.redhat.com/solutions/757983 . matahari component The Matahari agent framework ( matahari-* ) packages have been removed from Red Hat Enterprise Linux 6. Focus for remote systems management has shifted towards the use of the CIM infrastructure. This infrastructure relies on an existing standard, which provides a greater degree of interoperability for all users. distribution component The following packages have been deprecated and are subject to removal in a future release of Red Hat Enterprise Linux 6. These packages will not be updated in the Red Hat Enterprise Linux 6 repositories and customers who do not use the MRG-Messaging product are advised to uninstall them from their system. python-qmf python-qpid qpid-cpp qpid-qmf qpid-tests qpid-tools ruby-qpid saslwrapper Red Hat MRG-Messaging customers will continue to receive updated functionality as part of their regular updates to the product. fence-virt component The libvirt-qpid package is no longer part of the fence-virt package. openscap component The openscap-perl subpackage has been removed from openscap .
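To act on the distribution component note above, administrators who do not use the MRG-Messaging product can check for and remove the deprecated messaging packages. The following shell commands are a minimal sketch, not part of the original procedure; verify the package list against your own system before removing anything:

# Check which of the deprecated MRG-Messaging related packages are installed.
rpm -q python-qmf python-qpid qpid-cpp qpid-qmf qpid-tests qpid-tools ruby-qpid saslwrapper

# Remove the installed ones (run as root). Skip this step entirely if the
# system uses the MRG-Messaging product.
yum remove python-qmf python-qpid qpid-cpp qpid-qmf qpid-tests qpid-tools ruby-qpid saslwrapper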
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/chap-red_hat_enterprise_linux-6.10_technical_notes-deprecated_functionality
Chapter 12. Monitoring bare-metal events with the Bare Metal Event Relay
Chapter 12. Monitoring bare-metal events with the Bare Metal Event Relay Important Bare Metal Event Relay is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 12.1. About bare-metal events Use the Bare Metal Event Relay to subscribe applications that run in your OpenShift Container Platform cluster to events that are generated on the underlying bare-metal host. The Redfish service publishes events on a node and transmits them on an advanced message queue to subscribed applications. Bare-metal events are based on the open Redfish standard that is developed under the guidance of the Distributed Management Task Force (DMTF). Redfish provides a secure industry-standard protocol with a REST API. The protocol is used for the management of distributed, converged or software-defined resources and infrastructure. Hardware-related events published through Redfish includes: Breaches of temperature limits Server status Fan status Begin using bare-metal events by deploying the Bare Metal Event Relay Operator and subscribing your application to the service. The Bare Metal Event Relay Operator installs and manages the lifecycle of the Redfish bare-metal event service. Note The Bare Metal Event Relay works only with Redfish-capable devices on single-node clusters provisioned on bare-metal infrastructure. 12.2. How bare-metal events work The Bare Metal Event Relay enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure. These hardware events are delivered using an HTTP transport or AMQP mechanism. The latency of the messaging service is between 10 to 20 milliseconds. The Bare Metal Event Relay provides a publish-subscribe service for the hardware events. Applications can use a REST API to subscribe to the events. The Bare Metal Event Relay supports hardware that complies with Redfish OpenAPI v1.8 or later. 12.2.1. Bare Metal Event Relay data flow The following figure illustrates an example bare-metal events data flow: Figure 12.1. Bare Metal Event Relay data flow 12.2.1.1. Operator-managed pod The Operator uses custom resources to manage the pod containing the Bare Metal Event Relay and its components using the HardwareEvent CR. 12.2.1.2. Bare Metal Event Relay At startup, the Bare Metal Event Relay queries the Redfish API and downloads all the message registries, including custom registries. The Bare Metal Event Relay then begins to receive subscribed events from the Redfish hardware. The Bare Metal Event Relay enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure. The events are reported using the HardwareEvent CR. 12.2.1.3. Cloud native event Cloud native events (CNE) is a REST API specification for defining the format of event data. 12.2.1.4. 
CNCF CloudEvents CloudEvents is a vendor-neutral specification developed by the Cloud Native Computing Foundation (CNCF) for defining the format of event data. 12.2.1.5. HTTP transport or AMQP dispatch router The HTTP transport or AMQP dispatch router is responsible for the message delivery service between publisher and subscriber. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 12.2.1.6. Cloud event proxy sidecar The cloud event proxy sidecar container image is based on the O-RAN API specification and provides a publish-subscribe event framework for hardware events. 12.2.2. Redfish message parsing service In addition to handling Redfish events, the Bare Metal Event Relay provides message parsing for events without a Message property. The proxy downloads all the Redfish message registries including vendor specific registries from the hardware when it starts. If an event does not contain a Message property, the proxy uses the Redfish message registries to construct the Message and Resolution properties and add them to the event before passing the event to the cloud events framework. This service allows Redfish events to have smaller message size and lower transmission latency. 12.2.3. Installing the Bare Metal Event Relay using the CLI As a cluster administrator, you can install the Bare Metal Event Relay Operator by using the CLI. Prerequisites A cluster that is installed on bare-metal hardware with nodes that have a RedFish-enabled Baseboard Management Controller (BMC). Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the Bare Metal Event Relay. Save the following YAML in the bare-metal-events-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events openshift.io/cluster-monitoring: "true" Create the Namespace CR: USD oc create -f bare-metal-events-namespace.yaml Create an Operator group for the Bare Metal Event Relay Operator. Save the following YAML in the bare-metal-events-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events Create the OperatorGroup CR: USD oc create -f bare-metal-events-operatorgroup.yaml Subscribe to the Bare Metal Event Relay. Save the following YAML in the bare-metal-events-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: "stable" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR: USD oc create -f bare-metal-events-sub.yaml Verification To verify that the Bare Metal Event Relay Operator is installed, run the following command: USD oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase 12.2.4. Installing the Bare Metal Event Relay using the web console As a cluster administrator, you can install the Bare Metal Event Relay Operator using the web console. 
Prerequisites A cluster that is installed on bare-metal hardware with nodes that have a RedFish-enabled Baseboard Management Controller (BMC). Log in as a user with cluster-admin privileges. Procedure Install the Bare Metal Event Relay using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Bare Metal Event Relay from the list of available Operators, and then click Install . On the Install Operator page, select or create a Namespace , select openshift-bare-metal-events , and then click Install . Verification Optional: You can verify that the Operator installed successfully by performing the following check: Switch to the Operators Installed Operators page. Ensure that Bare Metal Event Relay is listed in the project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the project namespace. 12.3. Installing the AMQ messaging bus To pass Redfish bare-metal event notifications between publisher and subscriber on a node, you can install and configure an AMQ messaging bus to run locally on the node. You do this by installing the AMQ Interconnect Operator for use in the cluster. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Install the AMQ Interconnect Operator to its own amq-interconnect namespace. See Installing the AMQ Interconnect Operator . Verification Verify that the AMQ Interconnect Operator is available and the required pods are running: USD oc get pods -n amq-interconnect Example output NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h Verify that the required bare-metal-event-relay bare-metal event producer pod is running in the openshift-bare-metal-events namespace: USD oc get pods -n openshift-bare-metal-events Example output NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s 12.4. Subscribing to Redfish BMC bare-metal events for a cluster node You can subscribe to Redfish BMC events generated on a node in your cluster by creating a BMCEventSubscription custom resource (CR) for the node, creating a HardwareEvent CR for the event, and creating a Secret CR for the BMC. 12.4.1. Subscribing to bare-metal events You can configure the baseboard management controller (BMC) to send bare-metal events to subscribed applications running in an OpenShift Container Platform cluster. Example Redfish bare-metal events include an increase in device temperature, or removal of a device. You subscribe applications to bare-metal events using a REST API. 
Important You can only create a BMCEventSubscription custom resource (CR) for physical hardware that supports Redfish and has a vendor interface set to redfish or idrac-redfish . Note Use the BMCEventSubscription CR to subscribe to predefined Redfish events. The Redfish standard does not provide an option to create specific alerts and thresholds. For example, to receive an alert event when an enclosure's temperature exceeds 40deg Celsius, you must manually configure the event according to the vendor's recommendations. Perform the following procedure to subscribe to bare-metal events for the node using a BMCEventSubscription CR. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Get the user name and password for the BMC. Deploy a bare-metal node with a Redfish-enabled Baseboard Management Controller (BMC) in your cluster, and enable Redfish events on the BMC. Note Enabling Redfish events on specific hardware is outside the scope of this information. For more information about enabling Redfish events for your specific hardware, consult the BMC manufacturer documentation. Procedure Confirm that the node hardware has the Redfish EventService enabled by running the following curl command: USD curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u "<bmc_username>:<password>" where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output { "@odata.context": "/redfish/v1/USDmetadata#EventService.EventService", "@odata.id": "/redfish/v1/EventService", "@odata.type": "#EventService.v1_0_2.EventService", "Actions": { "#EventService.SubmitTestEvent": { "[email protected]": ["StatusChange", "ResourceUpdated", "ResourceAdded", "ResourceRemoved", "Alert"], "target": "/redfish/v1/EventService/Actions/EventService.SubmitTestEvent" } }, "DeliveryRetryAttempts": 3, "DeliveryRetryIntervalSeconds": 30, "Description": "Event Service represents the properties for the service", "EventTypesForSubscription": ["StatusChange", "ResourceUpdated", "ResourceAdded", "ResourceRemoved", "Alert"], "[email protected]": 5, "Id": "EventService", "Name": "Event Service", "ServiceEnabled": true, "Status": { "Health": "OK", "HealthRollup": "OK", "State": "Enabled" }, "Subscriptions": { "@odata.id": "/redfish/v1/EventService/Subscriptions" } } Get the Bare Metal Event Relay service route for the cluster by running the following command: USD oc get route -n openshift-bare-metal-events Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None Create a BMCEventSubscription resource to subscribe to the Redfish events: Save the following YAML in the bmc_sub.yaml file: apiVersion: metal3.io/v1alpha1 kind: BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: '' 1 Specifies the name or UUID of the worker node where the Redfish events are generated. 2 Specifies the bare-metal event proxy service, for example, https://hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com/webhook . 
Create the BMCEventSubscription CR: USD oc create -f bmc_sub.yaml Optional: To delete the BMC event subscription, run the following command: USD oc delete -f bmc_sub.yaml Optional: To manually create a Redfish event subscription without creating a BMCEventSubscription CR, run the following curl command, specifying the BMC username and password. USD curl -i -k -X POST -H "Content-Type: application/json" -d '{"Destination": "https://<proxy_service_url>", "Protocol" : "Redfish", "EventTypes": ["Alert"], "Context": "root"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v where: proxy_service_url is the bare-metal event proxy service, for example, https://hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com/webhook . bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: "1651135676" Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT 12.4.2. Querying Redfish bare-metal event subscriptions with curl Some hardware vendors limit the amount of Redfish hardware event subscriptions. You can query the number of Redfish event subscriptions by using curl . Prerequisites Get the user name and password for the BMC. Deploy a bare-metal node with a Redfish-enabled Baseboard Management Controller (BMC) in your cluster, and enable Redfish hardware events on the BMC. Procedure Check the current subscriptions for the BMC by running the following curl command: USD curl --globoff -H "Content-Type: application/json" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { "@odata.context": "/redfish/v1/USDmetadata#EventDestinationCollection.EventDestinationCollection", "@odata.etag": "" 1651137375 "", "@odata.id": "/redfish/v1/EventService/Subscriptions", "@odata.type": "#EventDestinationCollection.EventDestinationCollection", "Description": "Collection for Event Subscriptions", "Members": [ { "@odata.id": "/redfish/v1/EventService/Subscriptions/1" }], "[email protected]": 1, "Name": "Event Subscriptions Collection" } In this example, a single subscription is configured: /redfish/v1/EventService/Subscriptions/1 . Optional: To remove the /redfish/v1/EventService/Subscriptions/1 subscription with curl , run the following command, specifying the BMC username and password: USD curl --globoff -L -w "%{http_code} %{url_effective}\n" -k -u <bmc_username>:<password >-H "Content-Type: application/json" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1 where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. 12.4.3. 
Creating the bare-metal event and Secret CRs To start using bare-metal events, create the HardwareEvent custom resource (CR) for the host where the Redfish hardware is present. Hardware events and faults are reported in the hw-event-proxy logs. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have installed the Bare Metal Event Relay. You have created a BMCEventSubscription CR for the BMC Redfish hardware. Procedure Create the HardwareEvent custom resource (CR): Note Multiple HardwareEvent resources are not permitted. Save the following YAML in the hw-event.yaml file: apiVersion: "event.redhat-cne.org/v1alpha1" kind: "HardwareEvent" metadata: name: "hardware-event" spec: nodeSelector: node-role.kubernetes.io/hw-event: "" 1 logLevel: "debug" 2 msgParserTimeout: "10" 3 1 Required. Use the nodeSelector field to target nodes with the specified label, for example, node-role.kubernetes.io/hw-event: "" . Note In OpenShift Container Platform 4.13 or later, you do not need to set the spec.transportHost field in the HardwareEvent resource when you use HTTP transport for bare-metal events. Set transportHost only when you use AMQP transport for bare-metal events. 2 Optional. The default value is debug . Sets the log level in hw-event-proxy logs. The following log levels are available: fatal , error , warning , info , debug , trace . 3 Optional. Sets the timeout value in milliseconds for the Message Parser. If a message parsing request is not responded to within the timeout duration, the original hardware event message is passed to the cloud native event framework. The default value is 10. Apply the HardwareEvent CR in the cluster: USD oc create -f hardware-event.yaml Create a BMC username and password Secret CR that enables the hardware events proxy to access the Redfish message registry for the bare-metal host. Save the following YAML in the hw-event-bmc-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address> 1 Enter plain text values for the various items under stringData . Create the Secret CR: USD oc create -f hw-event-bmc-secret.yaml Additional resources Persistent storage using local volumes 12.5. Subscribing applications to bare-metal events REST API reference Use the bare-metal events REST API to subscribe an application to the bare-metal events that are generated on the parent node. Subscribe applications to Redfish events by using the resource address /cluster/node/<node_name>/redfish/event , where <node_name> is the cluster node running the application. Deploy your cloud-event-consumer application container and cloud-event-proxy sidecar container in a separate application pod. The cloud-event-consumer application subscribes to the cloud-event-proxy container in the application pod. 
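Before looking at the individual endpoints, it can help to see a subscription call end to end. The following curl commands are a sketch only, run from inside the application pod: they assume the cloud-event-proxy sidecar API listens on localhost:8089 and that the application runs on the node openshift-worker-0.openshift.example.com, matching the examples in the API reference below; replace these values for your environment.

# Create a subscription for Redfish events on the parent node by POSTing to
# the cloud-event-proxy sidecar API (assumed to be on localhost:8089).
curl -s -X POST http://localhost:8089/api/ocloudNotifications/v1/subscriptions \
  -H "Content-Type: application/json" \
  -d '{"uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event"}'

# List the current subscriptions to confirm that the subscription was created.
curl -s http://localhost:8089/api/ocloudNotifications/v1/subscriptions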
Use the following API endpoints to subscribe the cloud-event-consumer application to Redfish events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the application pod: /api/ocloudNotifications/v1/subscriptions POST : Creates a new subscription GET : Retrieves a list of subscriptions /api/ocloudNotifications/v1/subscriptions/<subscription_id> PUT : Creates a new status ping request for the specified subscription ID /api/ocloudNotifications/v1/health GET : Returns the health status of ocloudNotifications API Note 9089 is the default port for the cloud-event-consumer container deployed in the application pod. You can configure a different port for your application as required. api/ocloudNotifications/v1/subscriptions HTTP method GET api/ocloudNotifications/v1/subscriptions Description Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions. Example API response [ { "id": "ca11ab76-86f9-428c-8d3a-666c24e34d32", "endpointUri": "http://localhost:9089/api/ocloudNotifications/v1/dummy", "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } ] HTTP method POST api/ocloudNotifications/v1/subscriptions Description Creates a new subscription. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned. Table 12.1. Query parameters Parameter Type subscription data Example payload { "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } api/ocloudNotifications/v1/subscriptions/<subscription_id> HTTP method GET api/ocloudNotifications/v1/subscriptions/<subscription_id> Description Returns details for the subscription with ID <subscription_id> Table 12.2. Query parameters Parameter Type <subscription_id> string Example API response { "id":"ca11ab76-86f9-428c-8d3a-666c24e34d32", "endpointUri":"http://localhost:9089/api/ocloudNotifications/v1/dummy", "uriLocation":"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32", "resource":"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } api/ocloudNotifications/v1/health/ HTTP method GET api/ocloudNotifications/v1/health/ Description Returns the health status for the ocloudNotifications REST API. Example API response OK 12.6. Migrating consumer applications to use HTTP transport for PTP or bare-metal events If you have previously deployed PTP or bare-metal events consumer applications, you need to update the applications to use HTTP message transport. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have updated the PTP Operator or Bare Metal Event Relay to version 4.13+ which uses HTTP transport by default. Procedure Update your events consumer application to use HTTP transport. Set the http-event-publishers variable for the cloud event sidecar deployment. 
For example, in a cluster with PTP events configured, the following YAML snippet illustrates a cloud event sidecar deployment: containers: - name: cloud-event-sidecar image: cloud-event-sidecar args: - "--metrics-addr=127.0.0.1:9091" - "--store-path=/store" - "--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043" - "--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" 1 - "--api-port=8089" 1 The PTP Operator automatically resolves NODE_NAME to the host that is generating the PTP events. For example, compute-1.example.com . In a cluster with bare-metal events configured, set the http-event-publishers field to hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043 in the cloud event sidecar deployment CR. Deploy the consumer-events-subscription-service service alongside the events consumer application. For example: apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: "true" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP
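After migrating a consumer application to HTTP transport, a quick sanity check is to call the ocloudNotifications health endpoint described in the REST API reference earlier in this chapter. This is a sketch only and assumes the default sidecar API port 8089 used in the examples above:

# Run from inside the consumer application pod; a healthy API returns "OK".
curl -s http://localhost:8089/api/ocloudNotifications/v1/health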
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events openshift.io/cluster-monitoring: \"true\"", "oc create -f bare-metal-events-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events", "oc create -f bare-metal-events-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: \"stable\" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f bare-metal-events-sub.yaml", "oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase", "oc get pods -n amq-interconnect", "NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h", "oc get pods -n openshift-bare-metal-events", "NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s", "curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u \"<bmc_username>:<password>\"", "{ \"@odata.context\": \"/redfish/v1/USDmetadata#EventService.EventService\", \"@odata.id\": \"/redfish/v1/EventService\", \"@odata.type\": \"#EventService.v1_0_2.EventService\", \"Actions\": { \"#EventService.SubmitTestEvent\": { \"[email protected]\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"target\": \"/redfish/v1/EventService/Actions/EventService.SubmitTestEvent\" } }, \"DeliveryRetryAttempts\": 3, \"DeliveryRetryIntervalSeconds\": 30, \"Description\": \"Event Service represents the properties for the service\", \"EventTypesForSubscription\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"[email protected]\": 5, \"Id\": \"EventService\", \"Name\": \"Event Service\", \"ServiceEnabled\": true, \"Status\": { \"Health\": \"OK\", \"HealthRollup\": \"OK\", \"State\": \"Enabled\" }, \"Subscriptions\": { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\" } }", "oc get route -n openshift-bare-metal-events", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None", "apiVersion: metal3.io/v1alpha1 kind: BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: ''", "oc create -f bmc_sub.yaml", "oc delete -f bmc_sub.yaml", "curl -i -k -X POST -H \"Content-Type: application/json\" -d '{\"Destination\": \"https://<proxy_service_url>\", \"Protocol\" : \"Redfish\", \"EventTypes\": [\"Alert\"], \"Context\": \"root\"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v", "HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: 
<http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: \"1651135676\" Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT", "curl --globoff -H \"Content-Type: application/json\" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { \"@odata.context\": \"/redfish/v1/USDmetadata#EventDestinationCollection.EventDestinationCollection\", \"@odata.etag\": \"\" 1651137375 \"\", \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\", \"@odata.type\": \"#EventDestinationCollection.EventDestinationCollection\", \"Description\": \"Collection for Event Subscriptions\", \"Members\": [ { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions/1\" }], \"[email protected]\": 1, \"Name\": \"Event Subscriptions Collection\" }", "curl --globoff -L -w \"%{http_code} %{url_effective}\\n\" -k -u <bmc_username>:<password >-H \"Content-Type: application/json\" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1", "apiVersion: \"event.redhat-cne.org/v1alpha1\" kind: \"HardwareEvent\" metadata: name: \"hardware-event\" spec: nodeSelector: node-role.kubernetes.io/hw-event: \"\" 1 logLevel: \"debug\" 2 msgParserTimeout: \"10\" 3", "oc create -f hardware-event.yaml", "apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address>", "oc create -f hw-event-bmc-secret.yaml", "[ { \"id\": \"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\": \"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" } ]", "{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }", "{ \"id\":\"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\":\"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\":\"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }", "OK", "containers: - name: cloud-event-sidecar image: cloud-event-sidecar args: - \"--metrics-addr=127.0.0.1:9091\" - \"--store-path=/store\" - \"--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" 1 - \"--api-port=8089\"", "apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/using-rfhe
Chapter 12. Rebooting nodes
Chapter 12. Rebooting nodes You might need to reboot the nodes in the undercloud and overcloud. Note If you enabled instance HA (high availability) in your overcloud and if you need to shut down or reboot Compute nodes, see Chapter 3. Performing maintenance on the undercloud and overcloud with Instance HA in Configuring high availability for instances . Use the following procedures to understand how to reboot different node types. If you reboot all nodes in one role, it is advisable to reboot each node individually. If you reboot all nodes in a role simultaneously, service downtime can occur during the reboot operation. If you reboot all nodes in your OpenStack Platform environment, reboot the nodes in the following sequential order: Recommended node reboot order Reboot the undercloud node. Reboot Controller and other composable nodes. Reboot standalone Ceph MON nodes. Reboot Ceph Storage nodes. Reboot Object Storage service (swift) nodes. Reboot Compute nodes. 12.1. Rebooting the undercloud node Complete the following steps to reboot the undercloud node. Procedure Log in to the undercloud as the stack user. Reboot the undercloud: Wait until the node boots. 12.2. Rebooting Controller and composable nodes Reboot Controller nodes and standalone nodes based on composable roles, and exclude Compute nodes and Ceph Storage nodes. Procedure Log in to the node that you want to reboot. Optional: If the node uses Pacemaker resources, stop the cluster: [tripleo-admin@overcloud-controller-0 ~]USD sudo pcs cluster stop Reboot the node: [tripleo-admin@overcloud-controller-0 ~]USD sudo reboot Wait until the node boots. Verification Verify that the services are enabled. If the node uses Pacemaker services, check that the node has rejoined the cluster: [tripleo-admin@overcloud-controller-0 ~]USD sudo pcs status If the node uses Systemd services, check that all services are enabled: [tripleo-admin@overcloud-controller-0 ~]USD sudo systemctl status If the node uses containerized services, check that all containers on the node are active: [tripleo-admin@overcloud-controller-0 ~]USD sudo podman ps 12.3. Rebooting standalone Ceph MON nodes Complete the following steps to reboot standalone Ceph MON nodes. Procedure Log in to a Ceph MON node. Reboot the node: Wait until the node boots and rejoins the MON cluster. Repeat these steps for each MON node in the cluster. 12.4. Rebooting a Ceph Storage (OSD) cluster Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes. Prerequisites On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean : USD sudo cephadm -- shell ceph status If the Ceph cluster is healthy, it returns a status of HEALTH_OK . If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR . For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide . Procedure Log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and disable Ceph Storage cluster rebalancing temporarily: USD sudo cephadm shell -- ceph osd set noout USD sudo cephadm shell -- ceph osd set norebalance Note If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you set the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring . 
Select the first Ceph Storage node that you want to reboot and log in to the node. Reboot the node: Wait until the node boots. Log in to the node and check the Ceph cluster status: USD sudo cephadm -- shell ceph status Check that the pgmap reports all pgs as normal ( active+clean ). Log out of the node, reboot the node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes. When complete, log in to a Ceph Monitor or Controller node that is running the ceph-mon service and enable Ceph cluster rebalancing: USD sudo cephadm shell -- ceph osd unset noout USD sudo cephadm shell -- ceph osd unset norebalance Note If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you unset the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring Perform a final status check to verify that the cluster reports HEALTH_OK : USD sudo cephadm shell ceph status 12.5. Rebooting Object Storage service (swift) nodes The following procedure reboots Object Storage service (swift) nodes. Complete the following steps for every Object Storage node in your cluster. Procedure Log in to an Object Storage node. Reboot the node: Wait until the node boots. Repeat the reboot for each Object Storage node in the cluster. 12.6. Rebooting Compute nodes To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, the Migrating instances workflow outlines the steps you must complete to migrate instances from the Compute node that you want to reboot. Migrating instances workflow Decide whether to migrate instances to another Compute node before rebooting the node. Select and disable the Compute node that you want to reboot so that it does not provision new instances. Migrate the instances to another Compute node. Reboot the empty Compute node. Enable the empty Compute node. Prerequisites Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting. Review the list of migration constraints that you might encounter when you migrate virtual machine instances between Compute nodes. For more information, see Migration constraints in Configuring the Compute service for instance creation . Note If you have a Multi-RHEL environment, and you want to migrate virtual machines from a Compute node that is running RHEL 9.2 to a Compute node that is running RHEL 8.4, only cold migration is supported. For more information about cold migration, see Cold migrating an instance in Configuring the Compute service for instance creation . If you cannot migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots: NovaResumeGuestsStateOnHostBoot Determines whether to return instances to the same state on the Compute node after reboot. When set to False , the instances remain down and you must start them manually. The default value is False . NovaResumeGuestsShutdownTimeout Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to set this value to 0 . The default value is 300 . For more information about overcloud parameters and their usage, see Overcloud parameters . Procedure Log in to the undercloud as the stack user. 
Retrieve a list of your Compute nodes to identify the host name of the node that you want to reboot: Identify the host name of the Compute node that you want to reboot. Disable the Compute service on the Compute node that you want to reboot: Replace <hostname> with the host name of your Compute node. List all instances on the Compute node: (overcloud)USD openstack server list --host <hostname> --all-projects Optional: To migrate the instances to another Compute node, complete the following steps: If you decide to migrate the instances to another Compute node, use one of the following commands: To migrate the instance to a different host, run the following command: (overcloud) USD openstack server migrate <instance_id> --live <target_host> --wait Replace <instance_id> with your instance ID. Replace <target_host> with the host that you are migrating the instance to. Let nova-scheduler automatically select the target host: (overcloud) USD nova live-migration <instance_id> Live migrate all instances at once: USD nova host-evacuate-live <hostname> Note The nova command might cause some deprecation warnings, which are safe to ignore. Wait until migration completes. Confirm that the migration was successful: (overcloud) USD openstack server list --host <hostname> --all-projects Continue to migrate instances until none remain on the Compute node. Log in to the Compute node and reboot the node: [tripleo-admin@overcloud-compute-0 ~]USD sudo reboot Wait until the node boots. Re-enable the Compute node: USD source ~/overcloudrc (overcloud) USD openstack compute service set <hostname> nova-compute --enable Check that the Compute node is enabled: (overcloud) USD openstack compute service list
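The disable, migrate, reboot, and re-enable steps above can be strung together in a small script. The following is a minimal sketch under these assumptions: the overcloudrc credentials are already sourced, SSH access as tripleo-admin is configured, and overcloud-compute-0.localdomain is a placeholder for your own Compute node host name.

# Hedged sketch of the Compute node reboot workflow described above.
HOST=overcloud-compute-0.localdomain   # placeholder host name

openstack compute service set "$HOST" nova-compute --disable
nova host-evacuate-live "$HOST"        # live migrate all instances off the node

# Wait until no instances remain on the node before rebooting it.
while [ -n "$(openstack server list --host "$HOST" --all-projects -f value -c ID)" ]; do
    sleep 30
done

ssh tripleo-admin@"$HOST" sudo reboot || true
sleep 300                              # crude wait for the node to come back up
openstack compute service set "$HOST" nova-compute --enable
openstack compute service list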
[ "sudo reboot", "[tripleo-admin@overcloud-controller-0 ~]USD sudo pcs cluster stop", "[tripleo-admin@overcloud-controller-0 ~]USD sudo reboot", "[tripleo-admin@overcloud-controller-0 ~]USD sudo pcs status", "[tripleo-admin@overcloud-controller-0 ~]USD sudo systemctl status", "[tripleo-admin@overcloud-controller-0 ~]USD sudo podman ps", "sudo reboot", "sudo cephadm -- shell ceph status", "sudo cephadm shell -- ceph osd set noout sudo cephadm shell -- ceph osd set norebalance", "sudo reboot", "sudo cephadm -- shell ceph status", "sudo cephadm shell -- ceph osd unset noout sudo cephadm shell -- ceph osd unset norebalance", "sudo cephadm shell ceph status", "sudo reboot", "(undercloud)USD source ~/overcloudrc (overcloud)USD openstack compute service list", "(overcloud)USD openstack compute service list (overcloud)USD openstack compute service set <hostname> nova-compute --disable", "(overcloud)USD openstack server list --host <hostname> --all-projects", "(overcloud) USD openstack server migrate <instance_id> --live <target_host> --wait", "(overcloud) USD nova live-migration <instance_id>", "nova host-evacuate-live <hostname>", "(overcloud) USD openstack server list --host <hostname> --all-projects", "[tripleo-admin@overcloud-compute-0 ~]USD sudo reboot", "source ~/overcloudrc (overcloud) USD openstack compute service set <hostname> nova-compute --enable", "(overcloud) USD openstack compute service list" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/assembly_rebooting-nodes
Chapter 1. Overview of nodes
Chapter 1. Overview of nodes 1.1. About nodes A node is a virtual or bare-metal machine in a Kubernetes cluster. Worker nodes host your application containers, grouped as pods. The control plane nodes run services that are required to control the Kubernetes cluster. In OpenShift Container Platform, the control plane nodes contain more than just the Kubernetes services for managing the OpenShift Container Platform cluster. Having stable and healthy nodes in a cluster is fundamental to the smooth functioning of your hosted application. In OpenShift Container Platform, you can access, manage, and monitor a node through the Node object representing the node. Using the OpenShift CLI ( oc ) or the web console, you can perform the following operations on a node. The following components of a node are responsible for maintaining the running of pods and providing the Kubernetes runtime environment. Container runtime The container runtime is responsible for running containers. Kubernetes offers several runtimes such as containerd, cri-o, rktlet, and Docker. Kubelet Kubelet runs on nodes and reads the container manifests. It ensures that the defined containers have started and are running. The kubelet process maintains the state of work and the node server. Kubelet manages network rules and port forwarding. The kubelet manages containers that are created by Kubernetes only. Kube-proxy Kube-proxy runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. A Kube-proxy ensures that the networking environment is isolated and accessible. DNS Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. Read operations The read operations allow an administrator or a developer to get information about nodes in an OpenShift Container Platform cluster. List all the nodes in a cluster . Get information about a node, such as memory and CPU usage, health, status, and age. List pods running on a node . Management operations As an administrator, you can easily manage a node in an OpenShift Container Platform cluster through several tasks: Add or update node labels . A label is a key-value pair applied to a Node object. You can control the scheduling of pods using labels. Change node configuration using a custom resource definition (CRD), or the kubeletConfig object. Configure nodes to allow or disallow the scheduling of pods. Healthy worker nodes with a Ready status allow pod placement by default while the control plane nodes do not; you can change this default behavior by configuring the worker nodes to be unschedulable and the control plane nodes to be schedulable . Allocate resources for nodes using the system-reserved setting. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes, or you can manually determine and set the best resources for your nodes. Configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit, or both. Reboot a node gracefully using pod anti-affinity . Delete a node from a cluster by scaling down the cluster using a compute machine set. To delete a node from a bare-metal cluster, you must first drain all pods on the node and then manually delete the node. 
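As a point of reference, the read and management operations listed above map to a handful of oc commands. The node name and label in the sketch below are placeholders, not values taken from this document.

# Hedged examples of common node read and management operations.
oc get nodes                                         # list all nodes in the cluster
oc describe node <node_name>                         # status, capacity, conditions, age
oc adm top node                                      # current CPU and memory usage
oc get pods --all-namespaces --field-selector spec.nodeName=<node_name>   # pods on one node
oc label node <node_name> environment=production     # add or update a node label
oc adm cordon <node_name>                            # mark the node unschedulable
oc adm uncordon <node_name>                          # mark the node schedulable again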
Enhancement operations OpenShift Container Platform allows you to do more than just access and manage nodes; as an administrator, you can perform the following tasks on nodes to make the cluster more efficient, application-friendly, and to provide a better environment for your developers. Manage node-level tuning for high-performance applications that require some level of kernel tuning by using the Node Tuning Operator . Enable TLS security profiles on the node to protect communication between the kubelet and the Kubernetes API server. Run background tasks on nodes automatically with daemon sets . You can create and use daemon sets to create shared storage, run a logging pod on every node, or deploy a monitoring agent on all nodes. Free node resources using garbage collection . You can ensure that your nodes are running efficiently by removing terminated containers and the images not referenced by any running pods. Add kernel arguments to a set of nodes . Configure an OpenShift Container Platform cluster to have worker nodes at the network edge (remote worker nodes). For information on the challenges of having remote worker nodes in an OpenShift Container Platform cluster and some recommended approaches for managing pods on a remote worker node, see Using remote worker nodes at the network edge . 1.2. About pods A pod is one or more containers deployed together on a node. As a cluster administrator, you can define a pod, assign it to run on a healthy node that is ready for scheduling, and manage. A pod runs as long as the containers are running. You cannot change a pod once it is defined and is running. Some operations you can perform when working with pods are: Read operations As an administrator, you can get information about pods in a project through the following tasks: List pods associated with a project , including information such as the number of replicas and restarts, current status, and age. View pod usage statistics such as CPU, memory, and storage consumption. Management operations The following list of tasks provides an overview of how an administrator can manage pods in an OpenShift Container Platform cluster. Control scheduling of pods using the advanced scheduling features available in OpenShift Container Platform: Node-to-pod binding rules such as pod affinity , node affinity , and anti-affinity . Node labels and selectors . Taints and tolerations . Pod topology spread constraints . Secondary scheduling . Configure the descheduler to evict pods based on specific strategies so that the scheduler reschedules the pods to more appropriate nodes. Configure how pods behave after a restart using pod controllers and restart policies . Limit both egress and ingress traffic on a pod . Add and remove volumes to and from any object that has a pod template . A volume is a mounted file system available to all the containers in a pod. Container storage is ephemeral; you can use volumes to persist container data. Enhancement operations You can work with pods more easily and efficiently with the help of various tools and features available in OpenShift Container Platform. The following operations involve using those tools and features to better manage pods. Operation User More information Create and use a horizontal pod autoscaler. Developer You can use a horizontal pod autoscaler to specify the minimum and the maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target. 
Using a horizontal pod autoscaler, you can automatically scale pods . Install and use a vertical pod autoscaler . Administrator and developer As an administrator, use a vertical pod autoscaler to better use cluster resources by monitoring the resources and the resource requirements of workloads. As a developer, use a vertical pod autoscaler to ensure your pods stay up during periods of high demand by scheduling pods to nodes that have enough resources for each pod. Provide access to external resources using device plugins. Administrator A device plugin is a gRPC service running on nodes (external to the kubelet), which manages specific hardware resources. You can deploy a device plugin to provide a consistent and portable solution to consume hardware devices across clusters. Provide sensitive data to pods using the Secret object . Administrator Some applications need sensitive information, such as passwords and usernames. You can use the Secret object to provide such information to an application pod. 1.3. About containers A container is the basic unit of an OpenShift Container Platform application, which comprises the application code packaged along with its dependencies, libraries, and binaries. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. Linux container technologies are lightweight mechanisms for isolating running processes and limiting access to only designated resources. As an administrator, You can perform various tasks on a Linux container, such as: Copy files to and from a container . Allow containers to consume API objects . Execute remote commands in a container . Use port forwarding to access applications in a container . OpenShift Container Platform provides specialized containers called Init containers . Init containers run before application containers and can contain utilities or setup scripts not present in an application image. You can use an Init container to perform tasks before the rest of a pod is deployed. Apart from performing specific tasks on nodes, pods, and containers, you can work with the overall OpenShift Container Platform cluster to keep the cluster efficient and the application pods highly available. 1.4. About autoscaling pods on a node OpenShift Container Platform offers three tools that you can use to automatically scale the number of pods on your nodes and the resources allocated to pods. Horizontal Pod Autoscaler The Horizontal Pod Autoscaler (HPA) can automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration. For more information, see Automatically scaling pods with the horizontal pod autoscaler . Custom Metrics Autoscaler The Custom Metrics Autoscaler can automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory. For more information, see Custom Metrics Autoscaler Operator overview . Vertical Pod Autoscaler The Vertical Pod Autoscaler (VPA) can automatically review the historic and current CPU and memory resources for containers in pods and can update the resource limits and requests based on the usage values it learns. For more information, see Automatically adjust pod resource levels with the vertical pod autoscaler . 1.5. 
Glossary of common terms for OpenShift Container Platform nodes This glossary defines common terms that are used in the node content. Container A lightweight and executable image that comprises software and all its dependencies. Containers virtualize the operating system; as a result, you can run containers anywhere from a data center to a public or private cloud to even a developer's laptop. Daemon set Ensures that a replica of the pod runs on eligible nodes in an OpenShift Container Platform cluster. egress The process of data sharing externally through a network's outbound traffic from a pod. garbage collection The process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods. Horizontal Pod Autoscaler (HPA) Implemented as a Kubernetes API resource and a controller. You can use the HPA to specify the minimum and maximum number of pods that you want to run. You can also specify the CPU or memory utilization that your pods should target. The HPA scales out and scales in pods when a given CPU or memory threshold is crossed. Ingress Incoming traffic to a pod. Job A process that runs to completion. A job creates one or more pod objects and ensures that the specified pods are successfully completed. Labels You can use labels, which are key-value pairs, to organize and select subsets of objects, such as a pod. Node A worker machine in the OpenShift Container Platform cluster. A node can be either a virtual machine (VM) or a physical machine. Node Tuning Operator You can use the Node Tuning Operator to manage node-level tuning by using the TuneD daemon. It ensures custom tuning specifications are passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Self Node Remediation Operator The Operator runs on the cluster nodes and identifies and reboots nodes that are unhealthy. Pod One or more containers with shared resources, such as volumes and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. Toleration Indicates that the pod is allowed (but not required) to be scheduled on nodes or node groups with matching taints. You can use tolerations to enable the scheduler to schedule pods with matching taints. Taint A core object that comprises a key, value, and effect. Taints and tolerations work together to ensure that pods are not scheduled on irrelevant nodes.
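To make the Horizontal Pod Autoscaler entry above concrete, a single oc autoscale command is enough to create one. The deployment name and thresholds below are placeholders used only for illustration.

# Hedged example: scale a deployment between 2 and 10 replicas,
# targeting 75% average CPU utilization across its pods.
oc autoscale deployment/<deployment_name> --min=2 --max=10 --cpu-percent=75
oc get hpa                                # review the autoscaler that was created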
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/nodes/overview-of-nodes
Chapter 1. Introducing RHEL on public cloud platforms
Chapter 1. Introducing RHEL on public cloud platforms Public cloud platforms provide computing resources as a service. Instead of using on-premises hardware, you can run your IT workloads, including Red Hat Enterprise Linux (RHEL) systems, as public cloud instances. 1.1. Benefits of using RHEL in a public cloud RHEL as a cloud instance located on a public cloud platform has the following benefits over RHEL on-premises physical systems or virtual machines (VMs): Flexible and fine-grained allocation of resources A cloud instance of RHEL runs as a VM on a cloud platform, which typically means a cluster of remote servers maintained by the provider of the cloud service. Therefore, allocating hardware resources to the instance, such as a specific type of CPU or storage, happens on the software level and is easily customizable. In comparison to a local RHEL system, you are also not limited by the capabilities of your physical host. Instead, you can choose from a variety of features, based on selection offered by the cloud provider. Space and cost efficiency You do not need to own any on-premises servers to host your cloud workloads. This avoids the space, power, and maintenance requirements associated with physical hardware. Instead, on public cloud platforms, you pay the cloud provider directly for using a cloud instance. The cost is typically based on the hardware allocated to the instance and the time you spend using it. Therefore, you can optimize your costs based on your requirements. Software-controlled configurations The entire configuration of a cloud instance is saved as data on the cloud platform, and is controlled by software. Therefore, you can easily create, remove, clone, or migrate the instance. A cloud instance is also operated remotely in a cloud provider console and is connected to remote storage by default. In addition, you can back up the current state of a cloud instance as a snapshot at any time. Afterwards, you can load the snapshot to restore the instance to the saved state. Separation from the host and software compatibility Similarly to a local VM, the RHEL guest operating system on a cloud instance runs on a virtualized kernel. This kernel is separate from the host operating system and from the client system that you use to connect to the instance. Therefore, any operating system can be installed on the cloud instance. This means that on a RHEL public cloud instance, you can run RHEL-specific applications that cannot be used on your local operating system. In addition, even if the operating system of the instance becomes unstable or is compromised, your client system is not affected in any way. Additional resources What is public cloud? What is a hyperscaler? Types of cloud computing Public cloud use cases for RHEL Obtaining RHEL for public cloud deployments Why run Linux on AWS? 1.2. Public cloud use cases for RHEL Deploying on a public cloud provides many benefits, but might not be the most efficient solution in every scenario. If you are evaluating whether to migrate your RHEL deployments to the public cloud, consider whether your use case will benefit from the advantages of the public cloud. Beneficial use cases Deploying public cloud instances is very effective for flexibly increasing and decreasing the active computing power of your deployments, also known as scaling up and scaling down . Therefore, using RHEL on public cloud is recommended in the following scenarios: Clusters with high peak workloads and low general performance requirements. 
Scaling up and down based on your demands can be highly efficient in terms of resource costs. Quickly setting up or expanding your clusters. This avoids high upfront costs of setting up local servers. Cloud instances are not affected by what happens in your local environment. Therefore, you can use them for backup and disaster recovery. Potentially problematic use cases You are running an existing environment that cannot be adjusted. Customizing a cloud instance to fit the specific needs of an existing deployment may not be cost-effective in comparison with your current host platform. You are operating with a hard limit on your budget. Maintaining your deployment in a local data center typically provides less flexibility but more control over the maximum resource costs than the public cloud does. steps Obtaining RHEL for public cloud deployments Additional resources Should I migrate my application to the cloud? Here's how to decide. 1.3. Frequent concerns when migrating to a public cloud Moving your RHEL workloads from a local environment to a public cloud platform might raise concerns about the changes involved. The following are the most commonly asked questions. Will my RHEL work differently as a cloud instance than as a local virtual machine? In most respects, RHEL instances on a public cloud platform work the same as RHEL virtual machines on a local host, such as an on-premises server. Notable exceptions include: Instead of private orchestration interfaces, public cloud instances use provider-specific console interfaces for managing your cloud resources. Certain features, such as nested virtualization, may not work correctly. If a specific feature is critical for your deployment, check the feature's compatibility in advance with your chosen public cloud provider. Will my data stay safe in a public cloud as opposed to a local server? The data in your RHEL cloud instances is in your ownership, and your public cloud provider does not have any access to it. In addition, major cloud providers support data encryption in transit, which improves the security of data when migrating your virtual machines to the public cloud. The general security of your RHEL public cloud instances is managed as follows: Your public cloud provider is responsible for the security of the cloud hypervisor Red Hat provides the security features of the RHEL guest operating systems in your instances You manage the specific security settings and practices in your cloud infrastructure What effect does my geographic region have on the functionality of RHEL public cloud instances? You can use RHEL instances on a public cloud platform regardless of your geographical location. Therefore, you can run your instances in the same region as your on-premises server. However, hosting your instances in a physically distant region might cause high latency when operating them. In addition, depending on the public cloud provider, certain regions may provide additional features or be more cost-efficient. Before creating your RHEL instances, review the properties of the hosting regions available for your chosen cloud provider. 1.4. Obtaining RHEL for public cloud deployments To deploy a RHEL system in a public cloud environment, you need to: Select the optimal cloud provider for your use case, based on your requirements and the current offer on the market. 
The cloud providers currently certified for running RHEL instances are: Amazon Web Services (AWS) Google Cloud Platform (GCP) Microsoft Azure Note This document specifically talks about deploying RHEL on AWS. Create a RHEL cloud instance on your chosen cloud platform. For more information, see Methods for creating RHEL cloud instances . To keep your RHEL deployment up-to-date, use Red Hat Update Infrastructure (RHUI). Additional resources RHUI documentation Red Hat Open Hybrid Cloud 1.5. Methods for creating RHEL cloud instances To deploy a RHEL instance on a public cloud platform, you can use one of the following methods: Create a system image of RHEL and import it to the cloud platform. To create the system image, you can use the RHEL image builder or you can build the image manually. This method uses your existing RHEL subscription, and is also referred to as bring your own subscription (BYOS). You pre-pay a yearly subscription, and you can use your Red Hat customer discount. Your customer service is provided by Red Hat. For creating multiple images effectively, you can use the cloud-init tool. Purchase a RHEL instance directly from the cloud provider marketplace. You post-pay an hourly rate for using the service. Therefore, this method is also referred to as pay as you go (PAYG). Your customer service is provided by the cloud platform provider. Note For detailed instructions on using various methods to deploy RHEL instances on Amazon Web Services, see the following chapters in this document. Additional resources What is a golden image? Configuring and managing cloud-init for RHEL 8
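For the image-import (BYOS) method, the RHEL image builder command line can produce a cloud image non-interactively. The following is a minimal sketch, not a full procedure: it assumes the image builder packages are installed, and blueprint.toml and the my-rhel-image blueprint name are your own placeholders.

# Hedged sketch: build an AWS (AMI) image with RHEL image builder.
sudo composer-cli blueprints push blueprint.toml      # upload your blueprint
sudo composer-cli blueprints depsolve my-rhel-image   # check that the packages resolve
sudo composer-cli compose start my-rhel-image ami     # start an AMI-type build
sudo composer-cli compose status                      # watch until the build finishes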
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_rhel_8_on_amazon_web_services/introducing-rhel-on-public-cloud-platforms_cloud-content-AWS
Chapter 2. Getting started with Pacemaker
Chapter 2. Getting started with Pacemaker To familiarize yourself with the tools and processes you use to create a Pacemaker cluster, you can run the following procedures. They are intended for users who are interested in seeing what the cluster software looks like and how it is administered, without needing to configure a working cluster. Note These procedures do not create a supported Red Hat cluster, which requires at least two nodes and the configuration of a fencing device. For full information about Red Hat's support policies, requirements, and limitations for RHEL High Availability clusters, see Support Policies for RHEL High Availability Clusters . 2.1. Learning to use Pacemaker By working through this procedure, you will learn how to use Pacemaker to set up a cluster, how to display cluster status, and how to configure a cluster service. This example creates an Apache HTTP server as a cluster resource and shows how the cluster responds when the resource fails. In this example: The node is z1.example.com . The floating IP address is 192.168.122.120. Prerequisites A single node running RHEL 8 A floating IP address that resides on the same network as one of the node's statically assigned IP addresses The name of the node on which you are running is in your /etc/hosts file Procedure Install the Red Hat High Availability Add-On software packages from the High Availability channel, and start and enable the pcsd service. If you are running the firewalld daemon, enable the ports that are required by the Red Hat High Availability Add-On. Set a password for user hacluster on each node in the cluster and authenticate user hacluster for each node in the cluster on the node from which you will be running the pcs commands. This example uses only a single node, the node from which you are running the commands, but this step is included here since it is a necessary step in configuring a supported Red Hat High Availability multi-node cluster. Create a cluster named my_cluster with one member and check the status of the cluster. This command creates and starts the cluster in one step. A Red Hat High Availability cluster requires that you configure fencing for the cluster. The reasons for this requirement are described in the Red Hat Knowledgebase solution Fencing in a Red Hat High Availability Cluster . For this introduction, however, which is intended to show only how to use the basic Pacemaker commands, disable fencing by setting the stonith-enabled cluster option to false . Warning The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely fenced. Configure a web server on your system and create a web page to display a simple text message. If you are running the firewalld daemon, enable the ports that are required by httpd . Note Do not use systemctl enable to enable any services that will be managed by the cluster to start at system boot. In order for the Apache resource agent to get the status of Apache, create the following addition to the existing configuration to enable the status server URL. Create IPaddr2 and apache resources for the cluster to manage. The 'IPaddr2' resource is a floating IP address that must not be one already associated with a physical node. If the 'IPaddr2' resource's NIC device is not specified, the floating IP must reside on the same network as the statically assigned IP address used by the node.
You can display a list of all available resource types with the pcs resource list command. You can use the pcs resource describe resourcetype command to display the parameters you can set for the specified resource type. For example, the following command displays the parameters you can set for a resource of type apache : In this example, the IP address resource and the apache resource are both configured as part of a group named apachegroup , which ensures that the resources are kept together to run on the same node when you are configuring a working multi-node cluster. After you have configured a cluster resource, you can use the pcs resource config command to display the options that are configured for that resource. Point your browser to the website you created using the floating IP address you configured. This should display the text message you defined. Stop the apache web service and check the cluster status. Using killall -9 simulates an application-level crash. Check the cluster status. You should see that stopping the web service caused a failed action, but that the cluster software restarted the service and you should still be able to access the website. You can clear the failure status on the resource that failed once the service is up and running again, and the failed action notice will no longer appear when you view the cluster status. When you are finished looking at the cluster and the cluster status, stop the cluster services on the node. Even though you have only started services on one node for this introduction, the --all parameter is included since it would stop cluster services on all nodes on an actual multi-node cluster. 2.2. Learning to configure failover The following procedure provides an introduction to creating a Pacemaker cluster running a service that will fail over from one node to another when the node on which the service is running becomes unavailable. By working through this procedure, you can learn how to create a service in a two-node cluster and you can then observe what happens to that service when it fails on the node on which it is running. This example procedure configures a two-node Pacemaker cluster running an Apache HTTP server. You can then stop the Apache service on one node to see how the service remains available. In this example: The nodes are z1.example.com and z2.example.com . The floating IP address is 192.168.122.120. Prerequisites Two nodes running RHEL 8 that can communicate with each other A floating IP address that resides on the same network as one of the node's statically assigned IP addresses The name of the node on which you are running is in your /etc/hosts file Procedure On both nodes, install the Red Hat High Availability Add-On software packages from the High Availability channel, and start and enable the pcsd service. If you are running the firewalld daemon, on both nodes enable the ports that are required by the Red Hat High Availability Add-On. On both nodes in the cluster, set a password for user hacluster . Authenticate user hacluster for each node in the cluster on the node from which you will be running the pcs commands. Create a cluster named my_cluster with both nodes as cluster members. This command creates and starts the cluster in one step. You only need to run this from one node in the cluster because pcs configuration commands take effect for the entire cluster. On one node in the cluster, run the following command. A Red Hat High Availability cluster requires that you configure fencing for the cluster.
The reasons for this requirement are described in the Red Hat Knowledgebase solution Fencing in a Red Hat High Availability Cluster . For this introduction, however, to show only how failover works in this configuration, disable fencing by setting the stonith-enabled cluster option to false . Warning The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely fenced. After creating a cluster and disabling fencing, check the status of the cluster. Note When you run the pcs cluster status command, it may show output that temporarily differs slightly from the examples as the system components start up. On both nodes, configure a web server and create a web page to display a simple text message. If you are running the firewalld daemon, enable the ports that are required by httpd . Note Do not use systemctl enable to enable any services that will be managed by the cluster to start at system boot. In order for the Apache resource agent to get the status of Apache, on each node in the cluster create the following addition to the existing configuration to enable the status server URL. Create IPaddr2 and apache resources for the cluster to manage. The 'IPaddr2' resource is a floating IP address that must not be one already associated with a physical node. If the 'IPaddr2' resource's NIC device is not specified, the floating IP must reside on the same network as the statically assigned IP address used by the node. You can display a list of all available resource types with the pcs resource list command. You can use the pcs resource describe resourcetype command to display the parameters you can set for the specified resource type. For example, the following command displays the parameters you can set for a resource of type apache : In this example, the IP address resource and the apache resource are both configured as part of a group named apachegroup , which ensures that the resources are kept together to run on the same node. Run the following commands from one node in the cluster: Note that in this instance, the apachegroup service is running on node z1.example.com. Access the website you created, stop the service on the node on which it is running, and note how the service fails over to the second node. Point a browser to the website you created using the floating IP address you configured. This should display the text message you defined, along with the name of the node on which the website is running. Stop the apache web service. Using killall -9 simulates an application-level crash. Check the cluster status. You should see that stopping the web service caused a failed action, but that the cluster software restarted the service on the node on which it had been running and you should still be able to access the website. Clear the failure status once the service is up and running again. Put the node on which the service is running into standby mode. Note that since we have disabled fencing we cannot effectively simulate a node-level failure (such as pulling a power cable) because fencing is required for the cluster to recover from such situations. Check the status of the cluster and note where the service is now running. Access the website. There should be no loss of service, although the display message should indicate the node on which the service is now running. To restore cluster services to the first node, take the node out of standby mode.
This will not necessarily move the service back to that node. For final cleanup, stop the cluster services on both nodes.
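One simple way to watch the failover described above is to poll the floating IP address while you put a node into standby. The loop below is only an illustration and assumes the 192.168.122.120 address and the "My Test Site" page used in this example.

# Hedged sketch: poll the clustered website once per second to confirm
# that service continues while the resource group moves between nodes.
while true; do
    curl -s http://192.168.122.120/ | grep "My Test Site"
    sleep 1
done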
[ "yum install pcs pacemaker fence-agents-all systemctl start pcsd.service systemctl enable pcsd.service", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "passwd hacluster pcs host auth z1.example.com", "pcs cluster setup my_cluster --start z1.example.com pcs cluster status Cluster Status: Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Thu Oct 11 16:11:18 2018 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z1.example.com 1 node configured 0 resources configured PCSD Status: z1.example.com: Online", "pcs property set stonith-enabled=false", "yum install -y httpd wget firewall-cmd --permanent --add-service=http firewall-cmd --reload cat <<-END >/var/www/html/index.html <html> <body>My Test Site - USD(hostname)</body> </html> END", "cat <<-END > /etc/httpd/conf.d/status.conf <Location /server-status> SetHandler server-status Order deny,allow Deny from all Allow from 127.0.0.1 Allow from ::1 </Location> END", "pcs resource describe apache", "pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.122.120 --group apachegroup pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl=\"http://localhost/server-status\" --group apachegroup pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Fri Oct 12 09:54:33 2018 Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com 1 node configured 2 resources configured Online: [ z1.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com PCSD Status: z1.example.com: Online", "pcs resource config WebSite Resource: WebSite (class=ocf provider=heartbeat type=apache) Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status Operations: start interval=0s timeout=40s (WebSite-start-interval-0s) stop interval=0s timeout=60s (WebSite-stop-interval-0s) monitor interval=1min (WebSite-monitor-interval-1min)", "killall -9 httpd", "pcs status Cluster name: my_cluster Current DC: z1.example.com (version 1.1.13-10.el7-44eb2dd) - partition with quorum 1 node and 2 resources configured Online: [ z1.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com Failed Resource Actions: * WebSite_monitor_60000 on z1.example.com 'not running' (7): call=13, status=complete, exitreason='none', last-rc-change='Thu Oct 11 23:45:50 2016', queued=0ms, exec=0ms PCSD Status: z1.example.com: Online", "pcs resource cleanup WebSite", "pcs cluster stop --all", "yum install pcs pacemaker fence-agents-all systemctl start pcsd.service systemctl enable pcsd.service", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "passwd hacluster", "pcs host auth z1.example.com z2.example.com", "pcs cluster setup my_cluster --start z1.example.com z2.example.com", "pcs property set stonith-enabled=false", "pcs cluster status Cluster Status: Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Thu Oct 11 16:11:18 2018 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z1.example.com 2 nodes configured 0 resources configured PCSD Status: z1.example.com: 
Online z2.example.com: Online", "yum install -y httpd wget firewall-cmd --permanent --add-service=http firewall-cmd --reload cat <<-END >/var/www/html/index.html <html> <body>My Test Site - USD(hostname)</body> </html> END", "cat <<-END > /etc/httpd/conf.d/status.conf <Location /server-status> SetHandler server-status Order deny,allow Deny from all Allow from 127.0.0.1 Allow from ::1 </Location> END", "pcs resource describe apache", "pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.122.120 --group apachegroup pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl=\"http://localhost/server-status\" --group apachegroup pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Fri Oct 12 09:54:33 2018 Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com 2 nodes configured 2 resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com PCSD Status: z1.example.com: Online z2.example.com: Online", "killall -9 httpd", "pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Fri Oct 12 09:54:33 2018 Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com 2 nodes configured 2 resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com Failed Resource Actions: * WebSite_monitor_60000 on z1.example.com 'not running' (7): call=31, status=complete, exitreason='none', last-rc-change='Fri Feb 5 21:01:41 2016', queued=0ms, exec=0ms", "pcs resource cleanup WebSite", "pcs node standby z1.example.com", "pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Fri Oct 12 09:54:33 2018 Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com 2 nodes configured 2 resources configured Node z1.example.com: standby Online: [ z2.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z2.example.com WebSite (ocf::heartbeat:apache): Started z2.example.com", "pcs node unstandby z1.example.com", "pcs cluster stop --all" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_getting-started-with-pacemaker-configuring-and-managing-high-availability-clusters
15.4. Creating and Managing Users for a TPS
15.4. Creating and Managing Users for a TPS There are three defined roles for TPS users, which function as groups for the TPS: Agents , who perform actual token management operations, such as setting the token status and changing token policies Administrators , who manage users for the TPS subsystem and have limited control over tokens Operators , who have no management control but are able to view and list tokens, certificates, and activities performed through the TPS Additional groups cannot be added for the TPS. All of the TPS subsystem users are authenticated against an LDAP directory database that contains their certificate (because accessing the TPS's web services requires certificate-based authentication), and the authentication process checks the TPS group entries - ou=TUS Agents , ou=TUS Administrators , and ou=TUS Operators - to see to which roles the user belongs, using Apache's mod_tokendb module. Users for the TPS are added and managed through the Web UI or the CLI. The Web UI is accessible at https:// server.example.com :8443/tps/ui/ . To use the Web UI or the CLI, the TPS administrator has to authenticate using a user certificate. 15.4.1. Listing and Searching for Users 15.4.1.1. From the Web UI To list users from the Web UI: Click the Accounts tab. Click the Users menu item. The list of users appears on the page. To search for certain users, enter the keyword in the search field and press Enter . To list all users again, remove the keyword and press Enter . 15.4.1.2. From the Command Line To list users from the CLI, run: To view user details from the CLI, run: 15.4.2. Adding Users 15.4.2.1. From the Web UI To add a user from the Web UI: Click the Accounts tab. Click the Users menu item. Click the Add button on the Users page. Fill in the user ID, full name, and TPS profile. Click the Save button. 15.4.2.1.1. From the Command Line To add a user from the CLI, run: 15.4.3. Setting Profiles for Users A TPS profile is much like a CA profile; it defines rules for processing different types of tokens. The profile is assigned automatically to a token based on some characteristic of the token, like the CUID. Users can only see tokens for the profiles which are assigned to them. Note A user can only see entries relating to the profile configured for it, including both token operations and tokens themselves. For an administrator to be able to search and manage all tokens configured in the TPS, the administrator user entry should be set to All profiles . Setting specific profiles for users is a simple way to control access for operators and agents to specific users or token types. Token profiles are sets of policies and configurations that are applied to a token. Token profiles are mapped to tokens automatically based on some kind of attribute in the token itself, such as a CCUID range. Token profiles are created as other certificate profiles in the CA profile directory and are then added to the TPS configuration file, CS.cfg , to map the CA's token profile to the token type. Configuring token mapping is covered in Section 6.7, "Mapping Resolver Configuration" . To manage user profiles from the Web UI: Click the Accounts tab. Click the Users menu item. Click the user name of the user you want to modify. Click the Edit link. In the TPS Profile field, enter the profile names separated by commas, or enter All Profiles . Click the Save button. 15.4.4. Managing User Roles A role is just a group within the TPS. Each role can view different tabs of the TPS services pages.
The group is editable, so it is possible to add and remove role assignments for a user. A user can belong to more than one role or group. The bootstrap user, for example, belongs to all three groups. 15.4.4.1. From the Web UI To manage group members from the Web UI: Click the Accounts tab. Click the Groups menu item. Click the name of the group that you want to change, for example TPS Agents. To add a user to this group: Click the Add button. Enter the user ID. Click the Add button. To remove a user from this group: Select the check box to the user. Click the Remove button. Click the OK button. 15.4.4.2. From the Command Line To list groups from the CLI, run: To list group members from the CLI, run: To add a user to a group from the CLI, run: To delete a user from a group from the CLI, run: 15.4.5. Managing User Certificates User certificates can be managed from the CLI: To list user certificates, run: To add a certificate to a user: Obtain a user certificate for the new user. Requesting and submitting certificates is explained in Chapter 5, Requesting, Enrolling, and Managing Certificates . Important A TPS administrator must have a signing certificate. The recommended profile to use is Manual User Signing and Encryption Certificates Enrollment. Run the following command: To remove a certificate from a user, run: 15.4.6. Renewing TPS Agent and Administrator Certificates Regenerating the certificate takes its original key and its original profile and request, and recreates an identical key with a new validity period and expiration date. The TPS has a bootstrap user that was created at the time the subsystem was created. A new certificate can be requested for this user when their original one expires, using one of the default renewal profiles. Certificates for administrative users can be renewed directly in the end user enrollment forms, using the serial number of the original certificate. Renew the user certificates through the CA's end users forms, as described in Section 5.4.1.1.2, "Certificate-Based Renewal" . This must be the same CA as first issued the certificate (or a clone of it). Agent certificates can be renewed by using the certificate-based renewal form in the end entities page, Self-renew user SSL client certificate . This form recognizes and updates the certificate stored in the browser's certificate store directly. Note It is also possible to renew the certificate using certutil , as described in Section 17.3.3, "Renewing Certificates Using certutil" . Rather than using the certificate stored in a browser to initiate renewal, certutil uses an input file with the original key. Add the new certificate to the user and remove the old certificate as described in Section 15.4.5, "Managing User Certificates" . 15.4.7. Deleting Users Warning It is possible to delete the last user account, and the operation cannot be undone. Be very careful about the user which is selected to be deleted. To delete users from the Web UI: Click the Accounts tab. Click the Users menu item. Select the check box to the users to be deleted. Click the Remove button. Click the OK button. To delete a user from the CLI, run:
[ "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-find", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-show username", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-add username --fullName full_name", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-group-find", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-group-member-find group_name", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-group-member-add group_name user_name", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-group-member-del group_name user_name", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-cert-find user_name", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-cert-add user_name --serial cert_serial_number", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-cert-del user_name cert_id", "pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-del user_name" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/managing-user-and-groups-for_a_tps
4.6. Online Data Relocation
4.6. Online Data Relocation You can move data while the system is in use with the pvmove command. The pvmove command breaks up the data to be moved into sections and creates a temporary mirror to move each section. For more information on the operation of the pvmove command, see the pvmove (8) man page. Note In order to perform a pvmove operation in a cluster, you should ensure that the cmirror package is installed and the cmirrord service is running. The following command moves all allocated space off the physical volume /dev/sdc1 to other free physical volumes in the volume group: The following command moves just the extents of the logical volume MyLV . Since the pvmove command can take a long time to execute, you may want to run the command in the background to avoid display of progress updates in the foreground. The following command moves all extents allocated to the physical volume /dev/sdc1 over to /dev/sdf1 in the background. The following command reports the progress of the pvmove command as a percentage at five second intervals.
[ "pvmove /dev/sdc1", "pvmove -n MyLV /dev/sdc1", "pvmove -b /dev/sdc1 /dev/sdf1", "pvmove -i5 /dev/sdd1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/online_relocation
Chapter 4. Red Hat OpenShift Cluster Manager
Chapter 4. Red Hat OpenShift Cluster Manager Red Hat OpenShift Cluster Manager is a managed service where you can install, modify, operate, and upgrade your Red Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard. OpenShift Cluster Manager guides you to install OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated clusters. It is also responsible for managing both OpenShift Container Platform clusters after self-installation as well as your ROSA and OpenShift Dedicated clusters. You can use OpenShift Cluster Manager to do the following actions: Create new clusters View cluster details and metrics Manage your clusters with tasks such as scaling, changing node labels, networking, authentication Manage access control Monitor clusters Schedule upgrades 4.1. Accessing Red Hat OpenShift Cluster Manager You can access OpenShift Cluster Manager with your configured OpenShift account. Prerequisites You have an account that is part of an OpenShift organization. If you are creating a cluster, your organization has specified quota. Procedure Log in to OpenShift Cluster Manager using your login credentials. 4.2. General actions On the top right of the cluster page, there are some actions that a user can perform on the entire cluster: Open console launches a web console so that the cluster owner can issue commands to the cluster. Actions drop-down menu allows the cluster owner to rename the display name of the cluster, change the number of load balancers and the amount of persistent storage on the cluster, if applicable, manually set the node count, and delete the cluster. Refresh icon forces a refresh of the cluster. 4.3. Cluster tabs Selecting an active, installed cluster shows tabs associated with that cluster. The following tabs display after the cluster's installation completes: Overview Access control Add-ons Networking Insights Advisor Machine pools Support Settings 4.3.1. Overview tab The Overview tab provides information about how your cluster was configured: Cluster ID is the unique identification for the created cluster. This ID can be used when issuing commands to the cluster from the command line. Type shows the OpenShift version that the cluster is using. Region is the server region. Provider shows which cloud provider the cluster was built upon. Availability shows which type of availability zone the cluster uses, either single or multizone. Version is the OpenShift version that is installed on the cluster. If there is an update available, you can update from this field. Created at shows the date and time that the cluster was created. Owner identifies who created the cluster and has owner rights. Subscription type shows the subscription model that was selected on creation. Infrastructure type is the type of account that the cluster uses. Status displays the current status of the cluster. Total vCPU shows the total available virtual CPU for this cluster. Total memory shows the total available memory for this cluster. Load balancers Persistent storage displays the amount of storage that is available on this cluster. Nodes shows the actual and desired nodes on the cluster. These numbers might not match due to cluster scaling. Network field shows the address and prefixes for network connectivity. Resource usage section of the tab displays the resources in use with a graph. Advisor recommendations section gives insight into security, performance, availability, and stability.
This section requires the use of remote health functionality. See Using Insights to identify issues with your cluster . Cluster history section shows everything that has been done with the cluster including creation and when a new version is identified. 4.3.2. Access control tab The Access control tab allows the cluster owner to set up an identity provider, grant elevated permissions, and grant roles to other users. Prerequisites You must be the cluster owner or have the correct permissions to grant roles on the cluster. Procedure Select the Grant role button. Enter the Red Hat account login for the user that you wish to grant a role on the cluster. Select the Grant role button on the dialog box. The dialog box closes, and the selected user shows the "Cluster Editor" access. 4.3.3. Add-ons tab The Add-ons tab displays all of the optional add-ons that can be added to the cluster. Select the desired add-on, and then select Install below the description for the add-on that displays. 4.3.4. Insights Advisor tab The Insights Advisor tab uses the Remote Health functionality of the OpenShift Container Platform to identify and mitigate risks to security, performance, availability, and stability. See Using Insights to identify issues with your cluster in the OpenShift Container Platform documentation. 4.3.5. Machine pools tab The Machine pools tab allows the cluster owner to create new machine pools, if there is enough available quota, or edit an existing machine pool. Selecting the More options > Scale opens the "Edit node count" dialog. In this dialog, you can change the node count per availability zone. If autoscaling is enabled, you can also set the range for autoscaling. 4.3.6. Support tab In the Support tab, you can add notification contacts for individuals that should receive cluster notifications. The username or email address that you provide must relate to a user account in the Red Hat organization where the cluster is deployed. Also from this tab, you can open a support case to request technical support for your cluster. 4.3.7. Settings tab The Settings tab provides a few options for the cluster owner: Monitoring , which is enabled by default, allows for reporting done on user-defined actions. See Understanding the monitoring stack . Update strategy allows you to determine if the cluster automatically updates on a certain day of the week at a specified time or if all updates are scheduled manually. Node draining sets the duration that protected workloads are respected during updates. When this duration has passed, the node is forcibly removed. Update status shows the current version and if there are any updates available. 4.4. Additional resources For the complete documentation for OpenShift Cluster Manager, see OpenShift Cluster Manager documentation .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/architecture/ocm-overview-ocp
Chapter 2. Installing Satellite Server
Chapter 2. Installing Satellite Server When the intended host for Satellite Server is in a disconnected environment, you can install Satellite Server by using an external computer to download an ISO image of the packages, and copying the packages to the system you want to install Satellite Server on. This method is not recommended for any other situation as ISO images might not contain the latest updates, bug fixes, and functionality. Use the following procedures to install Satellite Server, perform the initial configuration, and import subscription manifests. Before you continue, consider which manifests are relevant for your environment. For more information on manifests, see Managing Red Hat Subscriptions in Managing content . Note You cannot register Satellite Server to itself. 2.1. Downloading the binary DVD images Use this procedure to download the ISO images for Red Hat Enterprise Linux and Red Hat Satellite. Procedure Go to Red Hat Customer Portal and log in. Click DOWNLOADS . Select Red Hat Enterprise Linux . Ensure that you have the correct product and version for your environment. Product Variant is set to Red Hat Enterprise Linux for x86_64 . Version is set to the latest minor version of the product you plan to use as the base operating system. Architecture is set to the 64-bit version. On the Product Software tab, download the Binary DVD image for the latest Red Hat Enterprise Linux for x86_64 version. Click DOWNLOADS and select Red Hat Satellite . Ensure that you have the correct product and version for your environment. Product Variant is set to Red Hat Satellite . Version is set to the latest minor version of the product you plan to use. On the Product Software tab, download the Binary DVD image for the latest Red Hat Satellite version. Copy the ISO files to /var/tmp on the Satellite base operating system or other accessible storage device. 2.2. Configuring the base operating system with offline repositories Use this procedure to configure offline repositories for Red Hat Enterprise Linux 9 or Red Hat Enterprise Linux 8, and Red Hat Satellite ISO images. Procedure Create a directory to serve as the mount point for the ISO file corresponding to the version of the base operating system. Mount the ISO image for Red Hat Enterprise Linux to the mount point. To copy the ISO file's repository data file and change permissions, enter: Edit the repository data file and add the baseurl directive. Verify that the repository has been configured. Create a directory to serve as the mount point for the ISO file of Satellite Server. Mount the ISO image for Satellite Server to the mount point. 2.3. Optional: Using fapolicyd on Satellite Server By enabling fapolicyd on your Satellite Server, you can provide an additional layer of security by monitoring and controlling access to files and directories. The fapolicyd daemon uses the RPM database as a repository of trusted binaries and scripts. You can turn fapolicyd on or off on your Satellite Server or Capsule Server at any point. 2.3.1. Installing fapolicyd on Satellite Server You can install fapolicyd along with a new Satellite Server or install it on an existing Satellite Server. If you install fapolicyd along with a new Satellite Server, the installation process detects fapolicyd on your Red Hat Enterprise Linux host and deploys the Satellite Server rules automatically. Prerequisites Ensure your host has access to the BaseOS repositories of Red Hat Enterprise Linux. 
Procedure For a new installation, install fapolicyd: For an existing installation, install fapolicyd using satellite-maintain packages install: Start the fapolicyd service: Verification Verify that the fapolicyd service is running correctly: New Satellite Server or Capsule Server installations In case of new Satellite Server or Capsule Server installation, follow the standard installation procedures after installing and enabling fapolicyd on your Red Hat Enterprise Linux host. Additional resources For more information on fapolicyd, see Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 9 Security hardening or Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 8 Security hardening . 2.4. Installing the Satellite packages from the offline repositories Use this procedure to install the Satellite packages from the offline repositories. Procedure Ensure the ISO images for Red Hat Enterprise Linux Server and Red Hat Satellite are mounted: Import the Red Hat GPG keys: Ensure the base operating system is up to date with the Binary DVD image: Change to the directory where the Satellite ISO is mounted: Run the installation script in the mounted directory: Note The script contains a command that enables the satellite:el8 module. Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . If you have successfully installed the Satellite packages, the following message is displayed: Install is complete. Please run satellite-installer --scenario satellite . 2.5. Resolving package dependency errors If there are package dependency errors during installation of Satellite Server packages, you can resolve the errors by downloading and installing packages from Red Hat Customer Portal. For more information about resolving dependency errors, see the KCS solution How can I use the yum output to solve yum dependency errors? . If you have successfully installed the Satellite packages, skip this procedure. Procedure Go to the Red Hat Customer Portal and log in. Click DOWNLOADS . Click the product that contains the package that you want to download. Ensure that you have the correct Product Variant , Version , and Architecture for your environment. Click the Packages tab. In the Search field, enter the name of the package. Click the package. From the Version list, select the version of the package. At the bottom of the page, click Download Now . Copy the package to the Satellite base operating system. On Satellite Server, change to the directory where the package is located: Install the package locally: Change to the directory where the Satellite ISO is mounted: Verify that you have resolved the package dependency errors by installing Satellite Server packages. If there are further package dependency errors, repeat this procedure. Note The script contains a command that enables the satellite:el8 module. 
Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . If you have successfully installed the Satellite packages, the following message is displayed: Install is complete. Please run satellite-installer --scenario satellite . 2.6. Configuring Satellite Server Install Satellite Server using the satellite-installer installation script. Choose from one of the following methods: Section 2.6.1, "Configuring Satellite installation" . This method is performed by running the installation script with one or more command options. The command options override the corresponding default initial configuration options and are recorded in the Satellite answer file. You can run the script as often as needed to configure any necessary options. 2.6.1. Configuring Satellite installation This initial configuration procedure creates an organization, location, user name, and password. After the initial configuration, you can create additional organizations and locations if required. The initial configuration also installs PostgreSQL databases on the same server. The installation process can take tens of minutes to complete. If you are connecting remotely to the system, use a utility such as tmux that allows suspending and reattaching a communication session so that you can check the installation progress in case you become disconnected from the remote system. If you lose connection to the shell where the installation command is running, see the log at /var/log/foreman-installer/satellite.log to determine if the process completed successfully. Considerations Use the satellite-installer --scenario satellite --help command to display the most commonly used options and any default values. Use the satellite-installer --scenario satellite --full-help command to display advanced options. Specify a meaningful value for the option: --foreman-initial-organization . This can be your company name. An internal label that matches the value is also created and cannot be changed afterwards. If you do not specify a value, an organization called Default Organization with the label Default_Organization is created. You can rename the organization name but not the label. By default, all configuration files configured by the installer are managed. When satellite-installer runs, it overwrites any manual changes to the managed files with the intended values. This means that running the installer on a broken system should restore it to working order, regardless of changes made. For more information on how to apply custom configuration on other services, see Applying Custom Configuration to Satellite . Procedure Enter the following command with any additional options that you want to use: The script displays its progress and writes logs to /var/log/foreman-installer/satellite.log . Unmount the ISO images: 2.7. Disabling subscription connection Disable subscription connection on disconnected Satellite Server to avoid connecting to the Red Hat Portal. This will also prevent you from refreshing the manifest and updating upstream entitlements. 
Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Content tab. Set the Subscription Connection Enabled value to No . CLI procedure Enter the following command on Satellite Server: 2.8. Importing a Red Hat subscription manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Note Simple Content Access (SCA) is set on the organization, not the manifest. Importing a manifest does not change your organization's Simple Content Access status. Simple Content Access simplifies the subscription experience for administrators. For more information, see the Subscription Management Administration Guide for Red Hat Enterprise Linux on the Red Hat Customer Portal. Prerequisites Ensure you have a Red Hat subscription manifest exported from the Red Hat Customer Portal. For more information, see Using manifests for a disconnected Satellite Server in Subscription Central . Ensure that you disable subscription connection on your Satellite Server. For more information, see Section 2.7, "Disabling subscription connection" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Choose File . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . CLI procedure Copy the Red Hat subscription manifest file from your local machine to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. For more information, see Importing Content in Managing content .
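After the installer finishes and the manifest is imported, a quick health check can confirm that the configuration took effect. The following is a minimal verification sketch, not part of the documented procedure; it assumes you run it as root on Satellite Server and that the hammer CLI is already configured with admin credentials.

# Check that all Satellite services are running
satellite-maintain service status
# Confirm that the backend services respond
hammer ping
# Confirm that the subscription connection setting is disabled
hammer settings list | grep subscription_connection_enabled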
[ "scp localfile username@hostname:remotefile", "mkdir /media/rhel", "mount -o loop rhel-DVD .iso /media/rhel", "cp /media/rhel/media.repo /etc/yum.repos.d/rhel.repo chmod u+w /etc/yum.repos.d/rhel.repo", "[RHEL-BaseOS] name=Red Hat Enterprise Linux BaseOS mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel/BaseOS/ [RHEL-AppStream] name=Red Hat Enterprise Linux Appstream mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel/AppStream/", "yum repolist", "mkdir /media/sat6", "mount -o loop sat6-DVD .iso /media/sat6", "dnf install fapolicyd", "satellite-maintain packages install fapolicyd", "systemctl enable --now fapolicyd", "systemctl status fapolicyd", "findmnt -t iso9660", "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "dnf upgrade", "cd /media/sat6/", "./install_packages", "cd /path-to-package/", "dnf install package_name", "cd /media/sat6/", "./install_packages", "satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password", "umount /media/sat6 umount /media/rhel8", "hammer settings set --name subscription_connection_enabled --value false", "scp ~/ manifest_file .zip root@ satellite.example.com :~/.", "hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_satellite_server_in_a_disconnected_network_environment/installing_server_disconnected_satellite
Chapter 1. Support policy for Red Hat build of OpenJDK
Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these versions remain similar to Oracle JDK versions that are designated as long-term support (LTS). A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.10/rn-openjdk-support-policy
Chapter 9. DeploymentConfig [apps.openshift.io/v1]
Chapter 9. DeploymentConfig [apps.openshift.io/v1] Description Deployment Configs define the template for a pod and manages deploying new images or configuration changes. A single deployment configuration is usually analogous to a single micro-service. Can support many different deployment patterns, including full restart, customizable rolling updates, and fully custom behaviors, as well as pre- and post- deployment hooks. Each individual deployment is represented as a replication controller. A deployment is "triggered" when its configuration is changed or a tag in an Image Stream is changed. Triggers can be disabled to allow manual control over a deployment. The "strategy" determines how the deployment is carried out and may be changed at any time. The latestVersion field is updated when a new deployment is triggered by any means. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Deprecated: Use deployments or other means for declarative updates for pods instead. Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DeploymentConfigSpec represents the desired state of the deployment. status object DeploymentConfigStatus represents the current deployment state. 9.1.1. .spec Description DeploymentConfigSpec represents the desired state of the deployment. Type object Property Type Description minReadySeconds integer MinReadySeconds is the minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Paused indicates that the deployment config is paused resulting in no new deployments on template changes or changes in the template caused by other triggers. replicas integer Replicas is the number of desired replicas. revisionHistoryLimit integer RevisionHistoryLimit is the number of old ReplicationControllers to retain to allow for rollbacks. This field is a pointer to allow for differentiation between an explicit zero and not specified. Defaults to 10. (This only applies to DeploymentConfigs created via the new group API resource, not the legacy resource.) selector object (string) Selector is a label query over pods that should match the Replicas count. strategy object DeploymentStrategy describes how to perform a deployment. template PodTemplateSpec Template is the object that describes the pod that will be created if insufficient replicas are detected. test boolean Test ensures that this deployment config will have zero replicas except while a deployment is running. 
This allows the deployment config to be used as a continuous deployment test - triggering on images, running the deployment, and then succeeding or failing. Post strategy hooks and After actions can be used to integrate successful deployment with an action. triggers array Triggers determine how updates to a DeploymentConfig result in new deployments. If no triggers are defined, a new deployment can only occur as a result of an explicit client update to the DeploymentConfig with a new LatestVersion. If null, defaults to having a config change trigger. triggers[] object DeploymentTriggerPolicy describes a policy for a single trigger that results in a new deployment. 9.1.2. .spec.strategy Description DeploymentStrategy describes how to perform a deployment. Type object Property Type Description activeDeadlineSeconds integer ActiveDeadlineSeconds is the duration in seconds that the deployer pods for this deployment config may be active on a node before the system actively tries to terminate them. annotations object (string) Annotations is a set of key, value pairs added to custom deployer and lifecycle pre/post hook pods. customParams object CustomDeploymentStrategyParams are the input to the Custom deployment strategy. labels object (string) Labels is a set of key, value pairs added to custom deployer and lifecycle pre/post hook pods. recreateParams object RecreateDeploymentStrategyParams are the input to the Recreate deployment strategy. resources ResourceRequirements Resources contains resource requirements to execute the deployment and any hooks. rollingParams object RollingDeploymentStrategyParams are the input to the Rolling deployment strategy. type string Type is the name of a deployment strategy. 9.1.3. .spec.strategy.customParams Description CustomDeploymentStrategyParams are the input to the Custom deployment strategy. Type object Property Type Description command array (string) Command is optional and overrides CMD in the container Image. environment array (EnvVar) Environment holds the environment which will be given to the container for Image. image string Image specifies a container image which can carry out a deployment. 9.1.4. .spec.strategy.recreateParams Description RecreateDeploymentStrategyParams are the input to the Recreate deployment strategy. Type object Property Type Description mid object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. post object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. pre object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. timeoutSeconds integer TimeoutSeconds is the time to wait for updates before giving up. If the value is nil, a default will be used. 9.1.5. .spec.strategy.recreateParams.mid Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. 
tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.6. .spec.strategy.recreateParams.mid.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.7. .spec.strategy.recreateParams.mid.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.8. .spec.strategy.recreateParams.mid.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.9. .spec.strategy.recreateParams.post Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.10. .spec.strategy.recreateParams.post.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.11. .spec.strategy.recreateParams.post.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.12. 
.spec.strategy.recreateParams.post.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.13. .spec.strategy.recreateParams.pre Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.14. .spec.strategy.recreateParams.pre.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.15. .spec.strategy.recreateParams.pre.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.16. .spec.strategy.recreateParams.pre.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.17. .spec.strategy.rollingParams Description RollingDeploymentStrategyParams are the input to the Rolling deployment strategy. Type object Property Type Description intervalSeconds integer IntervalSeconds is the time to wait between polling deployment status after update. If the value is nil, a default will be used. maxSurge IntOrString MaxSurge is the maximum number of pods that can be scheduled above the original number of pods. Value can be an absolute number (ex: 5) or a percentage of total pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0 if MaxUnavailable is 0. By default, 25% is used. 
Example: when this is set to 30%, the new RC can be scaled up by 30% immediately when the rolling update starts. Once old pods have been killed, new RC can be scaled up further, ensuring that total number of pods running at any time during the update is atmost 130% of original pods. maxUnavailable IntOrString MaxUnavailable is the maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total pods at the start of update (ex: 10%). Absolute number is calculated from percentage by rounding down. This cannot be 0 if MaxSurge is 0. By default, 25% is used. Example: when this is set to 30%, the old RC can be scaled down by 30% immediately when the rolling update starts. Once new pods are ready, old RC can be scaled down further, followed by scaling up the new RC, ensuring that at least 70% of original number of pods are available at all times during the update. post object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. pre object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. timeoutSeconds integer TimeoutSeconds is the time to wait for updates before giving up. If the value is nil, a default will be used. updatePeriodSeconds integer UpdatePeriodSeconds is the time to wait between individual pod updates. If the value is nil, a default will be used. 9.1.18. .spec.strategy.rollingParams.post Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.19. .spec.strategy.rollingParams.post.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.20. .spec.strategy.rollingParams.post.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.21. .spec.strategy.rollingParams.post.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 
Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.22. .spec.strategy.rollingParams.pre Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.23. .spec.strategy.rollingParams.pre.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.24. .spec.strategy.rollingParams.pre.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.25. .spec.strategy.rollingParams.pre.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.26. .spec.triggers Description Triggers determine how updates to a DeploymentConfig result in new deployments. If no triggers are defined, a new deployment can only occur as a result of an explicit client update to the DeploymentConfig with a new LatestVersion. If null, defaults to having a config change trigger. Type array 9.1.27. .spec.triggers[] Description DeploymentTriggerPolicy describes a policy for a single trigger that results in a new deployment. Type object Property Type Description imageChangeParams object DeploymentTriggerImageChangeParams represents the parameters to the ImageChange trigger. type string Type of the trigger 9.1.28. .spec.triggers[].imageChangeParams Description DeploymentTriggerImageChangeParams represents the parameters to the ImageChange trigger. 
Type object Required from Property Type Description automatic boolean Automatic means that the detection of a new tag value should result in an image update inside the pod template. containerNames array (string) ContainerNames is used to restrict tag updates to the specified set of container names in a pod. If multiple triggers point to the same containers, the resulting behavior is undefined. Future API versions will make this a validation error. If ContainerNames does not point to a valid container, the trigger will be ignored. Future API versions will make this a validation error. from ObjectReference From is a reference to an image stream tag to watch for changes. From.Name is the only required subfield - if From.Namespace is blank, the namespace of the current deployment trigger will be used. lastTriggeredImage string LastTriggeredImage is the last image to be triggered. 9.1.29. .status Description DeploymentConfigStatus represents the current deployment state. Type object Required latestVersion observedGeneration replicas updatedReplicas availableReplicas unavailableReplicas Property Type Description availableReplicas integer AvailableReplicas is the total number of available pods targeted by this deployment config. conditions array Conditions represents the latest available observations of a deployment config's current state. conditions[] object DeploymentCondition describes the state of a deployment config at a certain point. details object DeploymentDetails captures information about the causes of a deployment. latestVersion integer LatestVersion is used to determine whether the current deployment associated with a deployment config is out of sync. observedGeneration integer ObservedGeneration is the most recent generation observed by the deployment config controller. readyReplicas integer Total number of ready pods targeted by this deployment. replicas integer Replicas is the total number of pods targeted by this deployment config. unavailableReplicas integer UnavailableReplicas is the total number of unavailable pods targeted by this deployment config. updatedReplicas integer UpdatedReplicas is the total number of non-terminated pods targeted by this deployment config that have the desired template spec. 9.1.30. .status.conditions Description Conditions represents the latest available observations of a deployment config's current state. Type array 9.1.31. .status.conditions[] Description DeploymentCondition describes the state of a deployment config at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of deployment condition. 9.1.32. .status.details Description DeploymentDetails captures information about the causes of a deployment. Type object Required causes Property Type Description causes array Causes are extended data associated with all the causes for creating a new deployment causes[] object DeploymentCause captures information about a particular cause of a deployment. message string Message is the user specified change message, if this deployment was triggered manually by the user 9.1.33. 
.status.details.causes Description Causes are extended data associated with all the causes for creating a new deployment Type array 9.1.34. .status.details.causes[] Description DeploymentCause captures information about a particular cause of a deployment. Type object Required type Property Type Description imageTrigger object DeploymentCauseImageTrigger represents details about the cause of a deployment originating from an image change trigger type string Type of the trigger that resulted in the creation of a new deployment 9.1.35. .status.details.causes[].imageTrigger Description DeploymentCauseImageTrigger represents details about the cause of a deployment originating from an image change trigger Type object Required from Property Type Description from ObjectReference From is a reference to the changed object which triggered a deployment. The field may have the kinds DockerImage, ImageStreamTag, or ImageStreamImage. 9.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/deploymentconfigs GET : list or watch objects of kind DeploymentConfig /apis/apps.openshift.io/v1/watch/deploymentconfigs GET : watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs DELETE : delete collection of DeploymentConfig GET : list or watch objects of kind DeploymentConfig POST : create a DeploymentConfig /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs GET : watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name} DELETE : delete a DeploymentConfig GET : read the specified DeploymentConfig PATCH : partially update the specified DeploymentConfig PUT : replace the specified DeploymentConfig /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs/{name} GET : watch changes to an object of kind DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/status GET : read status of the specified DeploymentConfig PATCH : partially update status of the specified DeploymentConfig PUT : replace status of the specified DeploymentConfig 9.2.1. /apis/apps.openshift.io/v1/deploymentconfigs HTTP method GET Description list or watch objects of kind DeploymentConfig Table 9.1. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfigList schema 401 - Unauthorized Empty 9.2.2. /apis/apps.openshift.io/v1/watch/deploymentconfigs HTTP method GET Description watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 9.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs HTTP method DELETE Description delete collection of DeploymentConfig Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.4. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind DeploymentConfig Table 9.5. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a DeploymentConfig Table 9.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.7. Body parameters Parameter Type Description body DeploymentConfig schema Table 9.8. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 202 - Accepted DeploymentConfig schema 401 - Unauthorized Empty 9.2.4. /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs HTTP method GET Description watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 9.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.5. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name} Table 9.10. Global path parameters Parameter Type Description name string name of the DeploymentConfig HTTP method DELETE Description delete a DeploymentConfig Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DeploymentConfig Table 9.13. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DeploymentConfig Table 9.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.15. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DeploymentConfig Table 9.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.17. Body parameters Parameter Type Description body DeploymentConfig schema Table 9.18. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty 9.2.6. /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs/{name} Table 9.19. Global path parameters Parameter Type Description name string name of the DeploymentConfig HTTP method GET Description watch changes to an object of kind DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.7. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/status Table 9.21. Global path parameters Parameter Type Description name string name of the DeploymentConfig HTTP method GET Description read status of the specified DeploymentConfig Table 9.22. 
HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DeploymentConfig Table 9.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.24. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DeploymentConfig Table 9.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.26. Body parameters Parameter Type Description body DeploymentConfig schema Table 9.27. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty
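As a practical illustration of the list endpoint above, the following sketch reads the DeploymentConfig objects in one namespace directly from the API. It assumes you are already logged in with the oc client; my-namespace is a placeholder for a real project name, and -k skips TLS verification for brevity.

# Reuse the token and API server URL from the current oc session
TOKEN=$(oc whoami --show-token)
SERVER=$(oc whoami --show-server)
# GET /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${SERVER}/apis/apps.openshift.io/v1/namespaces/my-namespace/deploymentconfigs"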
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/workloads_apis/deploymentconfig-apps-openshift-io-v1
Chapter 3. Configuring certificates issued by ADCS for smart card authentication in IdM
Chapter 3. Configuring certificates issued by ADCS for smart card authentication in IdM To configure smart card authentication in IdM for users whose certificates are issued by Active Directory (AD) certificate services: Your deployment is based on cross-forest trust between Identity Management (IdM) and Active Directory (AD). You want to allow smart card authentication for users whose accounts are stored in AD. Certificates are created and stored in Active Directory Certificate Services (ADCS). For an overview of smart card authentication, see Understanding smart card authentication . Configuration is accomplished in the following steps: Copying CA and user certificates from Active Directory to the IdM server and client Configuring the IdM server and clients for smart card authentication using ADCS certificates Converting a PFX (PKCS#12) file to be able to store the certificate and private key into the smart card Configuring timeouts in the sssd.conf file Creating certificate mapping rules for smart card authentication Prerequisites Identity Management (IdM) and Active Directory (AD) trust is installed For details, see Installing trust between IdM and AD . Active Directory Certificate Services (ADCS) is installed and certificates for users are generated 3.1. Windows Server settings required for trust configuration and certificate usage You must configure the following on the Windows Server: Active Directory Certificate Services (ADCS) is installed Certificate Authority is created Optional: If you are using Certificate Authority Web Enrollment, the Internet Information Services (IIS) must be configured Export the certificate: Key must have 2048 bits or more Include a private key You will need a certificate in the following format: Personal Information Exchange - PKCS #12(.PFX) Enable certificate privacy 3.2. Copying certificates from Active Directory using sftp To be able to use smart card authetication, you need to copy the following certificate files: A root CA certificate in the CER format: adcs-winserver-ca.cer on your IdM server. A user certificate with a private key in the PFX format: aduser1.pfx on an IdM client. Note This procedure expects SSH access is allowed. If SSH is unavailable the user must copy the file from the AD Server to the IdM server and client. Procedure Connect from the IdM server and copy the adcs-winserver-ca.cer root certificate to the IdM server: Connect from the IdM client and copy the aduser1.pfx user certificate to the client: Now the CA certificate is stored in the IdM server and the user certificates is stored on the client machine. 3.3. Configuring the IdM server and clients for smart card authentication using ADCS certificates You must configure the IdM (Identity Management) server and clients to be able to use smart card authentication in the IdM environment. IdM includes the ipa-advise scripts which makes all necessary changes: Install necessary packages Configure IdM server and clients Copy the CA certificates into the expected locations You can run ipa-advise on your IdM server. Follow this procedure to configure your server and clients for smart card authentication: On an IdM server: Preparing the ipa-advise script to configure your IdM server for smart card authentication. On an IdM server: Preparing the ipa-advise script to configure your IdM client for smart card authentication. On an IdM server: Applying the the ipa-advise server script on the IdM server using the AD certificate. Moving the client script to the IdM client machine. 
On an IdM client: Applying the the ipa-advise client script on the IdM client using the AD certificate. Prerequisites The certificate has been copied to the IdM server. Obtain the Kerberos ticket. Log in as a user with administration rights. Procedure On the IdM server, use the ipa-advise script for configuring a client: On the IdM server, use the ipa-advise script for configuring a server: On the IdM server, execute the script: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Copy the sc_client.sh script to the client system: Copy the Windows certificate to the client system: On the client system, run the client script: The CA certificate is installed in the correct format on the IdM server and client systems and step is to copy the user certificates onto the smart card itself. 3.4. Converting the PFX file Before you store the PFX (PKCS#12) file into the smart card, you must: Convert the file to the PEM format Extract the private key and the certificate to two different files Prerequisites The PFX file is copied into the IdM client machine. Procedure On the IdM client, into the PEM format: Extract the key into the separate file: Extract the public certificate into the separate file: At this point, you can store the aduser1.key and aduser1.crt into the smart card. 3.5. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates and start the pscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running 3.6. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pcks15-init tool creates a new slot on the smart card. Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 . 
Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the next step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 3.7. Configuring timeouts in sssd.conf Authentication with a smart card certificate might take longer than the default timeouts used by SSSD. Timeout expiration can be caused by: A slow reader Forwarding from a physical device into a virtual environment Too many certificates stored on the smart card A slow response from the OCSP (Online Certificate Status Protocol) responder if OCSP is used to verify the certificates In this case, you can increase the following timeouts in the sssd.conf file, for example, to 60 seconds: p11_child_timeout krb5_auth_timeout Prerequisites You must be logged in as root. Procedure Open the sssd.conf file: Change the value of p11_child_timeout : Change the value of krb5_auth_timeout : Save the settings. Now, the interaction with the smart card is allowed to run for 1 minute (60 seconds) before authentication fails with a timeout. 3.8. Creating certificate mapping rules for smart card authentication If you want to use one certificate for a user who has accounts in AD (Active Directory) and in IdM (Identity Management), you can create a certificate mapping rule on the IdM server. After creating such a rule, the user is able to authenticate with their smart card in both domains. For details about certificate mapping rules, see Certificate mapping rules for configuring authentication .
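The section above describes certificate mapping rules only at a high level. As a rough, hedged illustration of what such a rule can look like on the IdM server, the following sketch maps a smart card certificate to an AD user whose AD entry stores the whole certificate; the rule name, issuer DN, and AD domain are placeholder assumptions, and the exact --matchrule and --maprule values depend on how ADCS issues your certificates.

# Hypothetical example - replace the rule name, issuer DN, and AD domain
# with values that match your ADCS CA and trusted forest.
kinit admin
ipa certmaprule-add ad_smart_card_rule \
    --matchrule '<ISSUER>CN=adcs-winserver-ca,DC=ad,DC=example,DC=com' \
    --maprule '(userCertificate;binary={cert!bin})' \
    --domain ad.example.com

# SSSD caches mapping data, so clear the cache (or restart SSSD)
# before testing the new rule.
sss_cache -E
systemctl restart sssd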
[ "root@idmserver ~]# sftp [email protected] [email protected]'s password: Connected to [email protected]. sftp> cd <Path to certificates> sftp> ls adcs-winserver-ca.cer aduser1.pfx sftp> sftp> get adcs-winserver-ca.cer Fetching <Path to certificates>/adcs-winserver-ca.cer to adcs-winserver-ca.cer <Path to certificates>/adcs-winserver-ca.cer 100% 1254 15KB/s 00:00 sftp quit", "sftp [email protected] [email protected]'s password: Connected to [email protected]. sftp> cd /<Path to certificates> sftp> get aduser1.pfx Fetching <Path to certificates>/aduser1.pfx to aduser1.pfx <Path to certificates>/aduser1.pfx 100% 1254 15KB/s 00:00 sftp quit", "ipa-advise config-client-for-smart-card-auth > sc_client.sh", "ipa-advise config-server-for-smart-card-auth > sc_server.sh", "sh -x sc_server.sh adcs-winserver-ca.cer", "scp sc_client.sh [email protected]:/root Password: sc_client.sh 100% 2857 1.6MB/s 00:00", "scp adcs-winserver-ca.cer [email protected]:/root Password: adcs-winserver-ca.cer 100% 1254 952.0KB/s 00:00", "sh -x sc_client.sh adcs-winserver-ca.cer", "openssl pkcs12 -in aduser1.pfx -out aduser1_cert_only.pem -clcerts -nodes Enter Import Password:", "openssl pkcs12 -in adduser1.pfx -nocerts -out adduser1.pem > aduser1.key", "openssl pkcs12 -in adduser1.pfx -clcerts -nokeys -out aduser1_cert_only.pem > aduser1.crt", "yum -y install opensc gnutls-utils", "systemctl start pcscd", "systemctl status pcscd", "pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. Please enter PIN [Security Officer PIN]:", "pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name", "pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name", "pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init -F", "vim /etc/sssd/sssd.conf", "[pam] p11_child_timeout = 60", "[domain/IDM.EXAMPLE.COM] krb5_auth_timeout = 60" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_smart_card_authentication/configuring-certificates-issued-by-adcs-for-smart-card-authentication-in-idm_managing-smart-card-authentication
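After working through the chapter above, a quick sanity check is to list what was stored on the card and ask SSSD to resolve the AD user. This is only a sketch: the user name follows the aduser1 example from the chapter, and sssctl is assumed to be available from the sssd-tools package.

# List the certificates stored on the smart card (opensc tooling).
pkcs15-tool --list-certificates

# Check that SSSD can look up and authenticate the AD user.
sssctl user-checks aduser1@ad.example.com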
2. Related Documentation
2. Related Documentation For more information about using Red Hat Enterprise Linux, refer to the following resources: Red Hat Enterprise Linux Installation Guide - Provides information regarding installation of Red Hat Enterprise Linux. Red Hat Enterprise Linux Introduction to System Administration - Provides introductory information for new Red Hat Enterprise Linux system administrators. Red Hat Enterprise Linux System Administration Guide - Provides more detailed information about configuring Red Hat Enterprise Linux to suit your particular needs as a user. Red Hat Enterprise Linux Reference Guide - Provides detailed information suited for more experienced users to reference when needed, as opposed to step-by-step instructions. Red Hat Enterprise Linux Security Guide - Details the planning and the tools involved in creating a secured computing environment for the data center, workplace, and home. For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux, refer to the following resources: Red Hat Cluster Suite Overview - Provides a high level overview of the Red Hat Cluster Suite. Configuring and Managing a Red Hat Cluster - Provides information about installing, configuring and managing Red Hat Cluster components. LVM Administrator's Guide: Configuration and Administration - Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. Using GNBD with Global File System - Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS. Using Device-Mapper Multipath - Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux. Linux Virtual Server Administration - Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS). Red Hat Cluster Suite Release Notes - Provides information about the current release of Red Hat Cluster Suite. Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML and PDF versions online at the following location: http://www.redhat.com/docs
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/related_documentation-gfs
11.2. Adding Services and Certificates for Services
11.2. Adding Services and Certificates for Services While services can use keytabs, some services require certificates for access. In that case, a service can be added (or modified) to include a certificate with its service entry. 11.2.1. Adding Services and Certificates from the Web UI Open the Identity tab, and select the Services subtab. Click the Add link at the top of the services list. Select the service type from the drop-down menu, and give it a name. Select the hostname of the IdM host on which the service is running. The hostname is used to construct the full service principal name. Click the Add and Edit button to go directly to the service entry page. Scroll to the bottom of the page, to the Service Certificate section. Click the New Certificate button to create the service certificate. 11.2.2. Adding Services and Certificates from the Command Line Create the service principal. The service is recognized through a name like service/FQDN : For example: USD ipa service-add HTTP/server.example.com ------------------------------------------------------- Added service "HTTP/[email protected]" ------------------------------------------------------- Principal: HTTP/[email protected] Managed by: ipaserver.example.com Create a certificate for the service. Be sure to copy the keytab to the appropriate directory for the service. For example: Note Use the --add option to create the service automatically when requesting the certificate. Alternatively, use the getcert command, which creates and manages the certificate through certmonger . The options are described in more detail in Section B.1, "Requesting a Certificate with certmonger" .
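A minimal sketch of how to confirm the result, assuming the HTTP/server.example.com service from the example above: ipa service-show displays the service entry, including any certificate attached to it, and ipa-getcert list shows whether certmonger is tracking the certificate.

# Display the service entry, including the certificate attached to it.
ipa service-show HTTP/server.example.com

# If the certificate was requested through certmonger, list the tracking request.
ipa-getcert list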
[ "[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa service-add serviceName/hostname", "ipa service-add HTTP/server.example.com ------------------------------------------------------- Added service \"HTTP/[email protected]\" ------------------------------------------------------- Principal: HTTP/[email protected] Managed by: ipaserver.example.com", "ipa cert-request --principal=HTTP/web.example.com example.csr", "ipa-getcert request -d /etc/httpd/alias -n Server-Cert -K HTTP/client1.example.com -N 'CN=client1.example.com,O=EXAMPLE.COM'" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/request-service-service
Release notes for Red Hat build of OpenJDK 11.0.17
Release notes for Red Hat build of OpenJDK 11.0.17 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.17/index
31.3.3. Firewall Considerations
31.3.3. Firewall Considerations If you are performing the installation where the VNC viewer system is a workstation on a different subnet from the target system, you may run into network routing problems. VNC works fine as long as your viewer system has a route to the target system and ports 5900 and 5901 are open. If your environment has a firewall, make sure ports 5900 and 5901 are open between your workstation and the target system. In addition to passing the vnc boot parameter, you may also want to pass the vncpassword parameter in these scenarios. While the password is sent in plain text over the network, it does provide an extra step before a viewer can connect to a system. Once the viewer connects to the target system over VNC, no other connections are permitted. These limitations are usually sufficient for installation purposes. Important Be sure to use a temporary password for the vncpassword option. It should not be a password you use on any systems, especially a real root password. If you continue to have trouble, consider using the vncconnect parameter. In this mode of operation, you start the viewer on your system first, telling it to listen for an incoming connection. Pass vncconnect= HOST at the boot prompt and the installer will attempt to connect to the specified HOST (either a hostname or IP address).
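As a rough sketch of the vncconnect mode described above (the port, sample IP address, and viewer options are assumptions based on common VNC viewers such as TigerVNC, so adjust them for your environment):

# On the workstation: allow the reverse connection and start the viewer
# in listening mode (reverse VNC connections use TCP port 5500 by default).
iptables -I INPUT -p tcp --dport 5500 -j ACCEPT
vncviewer -listen

# On the target system, at the installer boot prompt, point the installer
# back at the workstation, for example:
#   linux vnc vncpassword=temp_pass vncconnect=192.168.1.10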
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/vnc_whitepaper-firewall-considerations
Chapter 2. Uninstalling OpenShift Pipelines
Chapter 2. Uninstalling OpenShift Pipelines Cluster administrators can uninstall the Red Hat OpenShift Pipelines Operator by performing the following steps: Delete the Custom Resources (CRs) for the optional components, TektonHub and TektonResult , if these CRs exist, and then delete the TektonConfig CR. Caution If you uninstall the Operator without removing the CRs of optional components, you cannot remove the components later. Uninstall the Red Hat OpenShift Pipelines Operator. Delete the Custom Resource Definitions (CRDs) of the operator.tekton.dev group. Uninstalling only the Operator will not remove the Red Hat OpenShift Pipelines components created by default when the Operator is installed. 2.1. Deleting the OpenShift Pipelines Custom Resources If the Custom Resources (CRs) for the optional components, TektonHub and TektonResult , exist, delete these CRs. Then delete the TektonConfig CR. Procedure In the Administrator perspective of the web console, navigate to Administration CustomResourceDefinitions . Type TektonHub in the Filter by name field to search for the TektonHub Custom Resource Definition (CRD). Click the name of the TektonHub CRD to display the details page for the CRD. Click the Instances tab. If an instance is displayed, click the Options menu for the displayed instance. Select Delete TektonHub . Click Delete to confirm the deletion of the CR. Repeat these steps, searching for TektonResult and then TektonConfig in the Filter by name box. If any instances are found for these CRDs, delete these instances. Note Deleting the CRs also deletes the Red Hat OpenShift Pipelines components and all the tasks and pipelines on the cluster. Important If you uninstall the Operator without removing the TektonHub and TektonResult CRs, you cannot remove the Tekton Hub and Tekton Results components later. 2.2. Uninstalling the Red Hat OpenShift Pipelines Operator You can uninstall the Red Hat OpenShift Pipelines Operator by using the Administrator perspective in the web console. Procedure From the Operators OperatorHub page, use the Filter by keyword box to search for the Red Hat OpenShift Pipelines Operator. Click the Red Hat OpenShift Pipelines Operator tile. The Operator tile indicates that the Operator is installed. In the Red Hat OpenShift Pipelines Operator description page, click Uninstall . In the Uninstall Operator? window, select Delete all operand instances for this operator , and then click Uninstall . Warning When you uninstall the OpenShift Pipelines Operator, all resources within the openshift-pipelines target namespace where OpenShift Pipelines is installed are lost, including the secrets you configured. 2.3. Deleting the Custom Resource Definitions of the operator.tekton.dev group Delete the Custom Resource Definitions (CRDs) of the operator.tekton.dev group. These CRDs are created by default during the installation of the Red Hat OpenShift Pipelines Operator. Procedure In the Administrator perspective of the web console, navigate to Administration CustomResourceDefinitions . Type operator.tekton.dev in the Filter by name box to search for the CRDs in the operator.tekton.dev group. To delete each of the displayed CRDs, complete the following steps: Click the Options menu . Select Delete CustomResourceDefinition . Click Delete to confirm the deletion of the CRD. Additional resources You can learn more about uninstalling Operators on OpenShift Container Platform in the deleting Operators from a cluster section.
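If you prefer the CLI, the same cleanup can be sketched with oc as follows. The instance names hub, result, and config are assumptions (they are common defaults), so list the instances first and substitute the names that actually exist in your cluster:

# Delete the optional component CRs first, then the TektonConfig CR.
oc get tektonhubs,tektonresults,tektonconfigs
oc delete tektonhub hub
oc delete tektonresult result
oc delete tektonconfig config

# After uninstalling the Operator, remove the operator.tekton.dev CRDs.
oc get crd -o name | grep 'operator.tekton.dev' | xargs oc delete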
null
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/installing_and_configuring/uninstalling-pipelines
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.16/proc-providing-feedback-on-redhat-documentation
Chapter 10. Viewing audit logs
Chapter 10. Viewing audit logs OpenShift Container Platform auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. 10.1. About the API audit log Audit works at the API server level, logging all requests coming to the server. Each audit log contains the following information: Table 10.1. Audit log fields Field Description level The audit level at which the event was generated. auditID A unique audit ID, generated for each request. stage The stage of the request handling when this event instance was generated. requestURI The request URI as sent by the client to a server. verb The Kubernetes verb associated with the request. For non-resource requests, this is the lowercase HTTP method. user The authenticated user information. impersonatedUser Optional. The impersonated user information, if the request is impersonating another user. sourceIPs Optional. The source IPs, from where the request originated and any intermediate proxies. userAgent Optional. The user agent string reported by the client. Note that the user agent is provided by the client, and must not be trusted. objectRef Optional. The object reference this request is targeted at. This does not apply for List -type requests, or non-resource requests. responseStatus Optional. The response status, populated even when the ResponseObject is not a Status type. For successful responses, this will only include the code. For non-status type error responses, this will be auto-populated with the error message. requestObject Optional. The API object from the request, in JSON format. The RequestObject is recorded as is in the request (possibly re-encoded as JSON), prior to version conversion, defaulting, admission or merging. It is an external versioned object type, and might not be a valid object on its own. This is omitted for non-resource requests and is only logged at request level and higher. responseObject Optional. The API object returned in the response, in JSON format. The ResponseObject is recorded after conversion to the external type, and serialized as JSON. This is omitted for non-resource requests and is only logged at response level. requestReceivedTimestamp The time that the request reached the API server. stageTimestamp The time that the request reached the current audit stage. annotations Optional. An unstructured key value map stored with an audit event that may be set by plugins invoked in the request serving chain, including authentication, authorization and admission plugins. Note that these annotations are for the audit event, and do not correspond to the metadata.annotations of the submitted object. Keys should uniquely identify the informing component to avoid name collisions, for example podsecuritypolicy.admission.k8s.io/policy . Values should be short. Annotations are included in the metadata level. 
Example output for the Kubernetes API server: {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"ad209ce1-fec7-4130-8192-c4cc63f1d8cd","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s","verb":"update","user":{"username":"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client","uid":"dd4997e3-d565-4e37-80f8-7fc122ccd785","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-controller-manager","system:authenticated"]},"sourceIPs":["::1"],"userAgent":"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat","objectRef":{"resource":"configmaps","namespace":"openshift-kube-controller-manager","name":"cert-recovery-controller-lock","uid":"5c57190b-6993-425d-8101-8337e48c7548","apiVersion":"v1","resourceVersion":"574307"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-04-02T08:27:20.200962Z","stageTimestamp":"2020-04-02T08:27:20.206710Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:kube-controller-manager-recovery\" of ClusterRole \"cluster-admin\" to ServiceAccount \"localhost-recovery-client/openshift-kube-controller-manager\""}} 10.2. Viewing the audit logs You can view the logs for the OpenShift API server, Kubernetes API server, OpenShift OAuth API server, and OpenShift OAuth server for each control plane node. Procedure To view the audit logs: View the OpenShift API server audit logs: List the OpenShift API server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=openshift-apiserver/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific OpenShift API server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=openshift-apiserver/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"381acf6d-5f30-4c7d-8175-c9c317ae5893","stage":"ResponseComplete","requestURI":"/metrics","verb":"get","user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","uid":"825b60a0-3976-4861-a342-3b2b561e8f82","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},"sourceIPs":["10.129.2.6"],"userAgent":"Prometheus/2.23.0","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-03-08T18:02:04.086545Z","stageTimestamp":"2021-03-08T18:02:04.107102Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"prometheus-k8s\" of ClusterRole \"prometheus-k8s\" to ServiceAccount \"prometheus-k8s/openshift-monitoring\""}} View the Kubernetes API server audit logs: List the Kubernetes API server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=kube-apiserver/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 
audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific Kubernetes API server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=kube-apiserver/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audit-2021-03-09T14-07-27.129.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa","verb":"get","user":{"username":"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator","uid":"2574b041-f3c8-44e6-a057-baef7aa81516","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-scheduler-operator","system:authenticated"]},"sourceIPs":["10.128.0.8"],"userAgent":"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat","objectRef":{"resource":"serviceaccounts","namespace":"openshift-kube-scheduler","name":"openshift-kube-scheduler-sa","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-03-08T18:06:42.512619Z","stageTimestamp":"2021-03-08T18:06:42.516145Z","annotations":{"authentication.k8s.io/legacy-token":"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator","authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:cluster-kube-scheduler-operator\" of ClusterRole \"cluster-admin\" to ServiceAccount \"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\""}} View the OpenShift OAuth API server audit logs: List the OpenShift OAuth API server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=oauth-apiserver/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific OpenShift OAuth API server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=oauth-apiserver/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6","stage":"ResponseComplete","requestURI":"/apis/user.openshift.io/v1/users/~","verb":"get","user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},"sourceIPs":["10.0.32.4","10.128.0.1"],"userAgent":"dockerregistry/v0.0.0 (linux/amd64) 
kubernetes/USDFormat","objectRef":{"resource":"users","name":"~","apiGroup":"user.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-03-08T17:47:43.653187Z","stageTimestamp":"2021-03-08T17:47:43.660187Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"basic-users\" of ClusterRole \"basic-user\" to Group \"system:authenticated\""}} View the OpenShift OAuth server audit logs: List the OpenShift OAuth server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=oauth-server/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific OpenShift OAuth server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=oauth-server/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"13c20345-f33b-4b7d-b3b6-e7793f805621","stage":"ResponseComplete","requestURI":"/login","verb":"post","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.128.2.6"],"userAgent":"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0","responseStatus":{"metadata":{},"code":302},"requestReceivedTimestamp":"2022-05-11T17:31:16.280155Z","stageTimestamp":"2022-05-11T17:31:16.297083Z","annotations":{"authentication.openshift.io/decision":"error","authentication.openshift.io/username":"kubeadmin","authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} The possible values for the authentication.openshift.io/decision annotation are allow , deny , or error . 10.3. Filtering audit logs You can use jq or another JSON parsing tool to filter the API server audit logs. Note The amount of information logged to the API server audit logs is controlled by the audit log policy that is set. The following procedure provides examples of using jq to filter audit logs on control plane node node-1.example.com . See the jq Manual for detailed information on using jq . Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed jq . 
Procedure Filter OpenShift API server audit logs by user: USD oc adm node-logs node-1.example.com \ --path=openshift-apiserver/audit.log \ | jq 'select(.user.username == "myusername")' Filter OpenShift API server audit logs by user agent: USD oc adm node-logs node-1.example.com \ --path=openshift-apiserver/audit.log \ | jq 'select(.userAgent == "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat")' Filter Kubernetes API server audit logs by a certain API version and only output the user agent: USD oc adm node-logs node-1.example.com \ --path=kube-apiserver/audit.log \ | jq 'select(.requestURI | startswith("/apis/apiextensions.k8s.io/v1beta1")) | .userAgent' Filter OpenShift OAuth API server audit logs by excluding a verb: USD oc adm node-logs node-1.example.com \ --path=oauth-apiserver/audit.log \ | jq 'select(.verb != "get")' Filter OpenShift OAuth server audit logs by events that identified a username and failed with an error: USD oc adm node-logs node-1.example.com \ --path=oauth-server/audit.log \ | jq 'select(.annotations["authentication.openshift.io/username"] != null and .annotations["authentication.openshift.io/decision"] == "error")' 10.4. Gathering audit logs You can use the must-gather tool to collect the audit logs for debugging your cluster, which you can review or send to Red Hat Support. Procedure Run the oc adm must-gather command with -- /usr/bin/gather_audit_logs : USD oc adm must-gather -- /usr/bin/gather_audit_logs Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1 1 Replace must-gather-local.472290403699006248 with the actual directory name. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal. 10.5. Additional resources Must-gather tool API audit log event structure Configuring the audit log policy
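The same jq pattern extends to other fields. For example, a sketch that reuses the node name from the examples above to pull only error responses, or to count events per verb:

# Show Kubernetes API server events whose response code indicates an error.
oc adm node-logs node-1.example.com \
  --path=kube-apiserver/audit.log \
  | jq 'select(.responseStatus.code >= 400)'

# Count events per verb for a quick overview of API activity.
oc adm node-logs node-1.example.com \
  --path=kube-apiserver/audit.log \
  | jq -s 'group_by(.verb) | map({verb: .[0].verb, count: length})'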
[ "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"ad209ce1-fec7-4130-8192-c4cc63f1d8cd\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s\",\"verb\":\"update\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client\",\"uid\":\"dd4997e3-d565-4e37-80f8-7fc122ccd785\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-controller-manager\",\"system:authenticated\"]},\"sourceIPs\":[\"::1\"],\"userAgent\":\"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"configmaps\",\"namespace\":\"openshift-kube-controller-manager\",\"name\":\"cert-recovery-controller-lock\",\"uid\":\"5c57190b-6993-425d-8101-8337e48c7548\",\"apiVersion\":\"v1\",\"resourceVersion\":\"574307\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2020-04-02T08:27:20.200962Z\",\"stageTimestamp\":\"2020-04-02T08:27:20.206710Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:kube-controller-manager-recovery\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"localhost-recovery-client/openshift-kube-controller-manager\\\"\"}}", "oc adm node-logs --role=master --path=openshift-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=openshift-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"381acf6d-5f30-4c7d-8175-c9c317ae5893\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/metrics\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"uid\":\"825b60a0-3976-4861-a342-3b2b561e8f82\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.129.2.6\"],\"userAgent\":\"Prometheus/2.23.0\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:02:04.086545Z\",\"stageTimestamp\":\"2021-03-08T18:02:04.107102Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"prometheus-k8s\\\" of ClusterRole \\\"prometheus-k8s\\\" to ServiceAccount \\\"prometheus-k8s/openshift-monitoring\\\"\"}}", "oc adm node-logs --role=master --path=kube-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=kube-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 
--path=kube-apiserver/audit-2021-03-09T14-07-27.129.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"uid\":\"2574b041-f3c8-44e6-a057-baef7aa81516\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-scheduler-operator\",\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.8\"],\"userAgent\":\"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"serviceaccounts\",\"namespace\":\"openshift-kube-scheduler\",\"name\":\"openshift-kube-scheduler-sa\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:06:42.512619Z\",\"stageTimestamp\":\"2021-03-08T18:06:42.516145Z\",\"annotations\":{\"authentication.k8s.io/legacy-token\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:cluster-kube-scheduler-operator\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\\\"\"}}", "oc adm node-logs --role=master --path=oauth-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=oauth-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/apis/user.openshift.io/v1/users/~\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.0.32.4\",\"10.128.0.1\"],\"userAgent\":\"dockerregistry/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"users\",\"name\":\"~\",\"apiGroup\":\"user.openshift.io\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T17:47:43.653187Z\",\"stageTimestamp\":\"2021-03-08T17:47:43.660187Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"basic-users\\\" of ClusterRole \\\"basic-user\\\" to Group \\\"system:authenticated\\\"\"}}", "oc adm node-logs --role=master --path=oauth-server/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log 
ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=oauth-server/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"13c20345-f33b-4b7d-b3b6-e7793f805621\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/login\",\"verb\":\"post\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.128.2.6\"],\"userAgent\":\"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0\",\"responseStatus\":{\"metadata\":{},\"code\":302},\"requestReceivedTimestamp\":\"2022-05-11T17:31:16.280155Z\",\"stageTimestamp\":\"2022-05-11T17:31:16.297083Z\",\"annotations\":{\"authentication.openshift.io/decision\":\"error\",\"authentication.openshift.io/username\":\"kubeadmin\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}", "oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.user.username == \"myusername\")'", "oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.userAgent == \"cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\")'", "oc adm node-logs node-1.example.com --path=kube-apiserver/audit.log | jq 'select(.requestURI | startswith(\"/apis/apiextensions.k8s.io/v1beta1\")) | .userAgent'", "oc adm node-logs node-1.example.com --path=oauth-apiserver/audit.log | jq 'select(.verb != \"get\")'", "oc adm node-logs node-1.example.com --path=oauth-server/audit.log | jq 'select(.annotations[\"authentication.openshift.io/username\"] != null and .annotations[\"authentication.openshift.io/decision\"] == \"error\")'", "oc adm must-gather -- /usr/bin/gather_audit_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/security_and_compliance/audit-log-view
Chapter 2. Working with pods
Chapter 2. Working with pods 2.1. Using pods A pod is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. 2.1.1. Understanding pods Pods are the rough equivalent of a machine instance (physical or virtual) to a Container. Each pod is allocated its own internal IP address, therefore owning its entire port space, and containers within pods can share their local storage and networking. Pods have a lifecycle; they are defined, then they are assigned to run on a node, then they run until their container(s) exit or they are removed for some other reason. Pods, depending on policy and exit code, might be removed after exiting, or can be retained to enable access to the logs of their containers. OpenShift Container Platform treats pods as largely immutable; changes cannot be made to a pod definition while it is running. OpenShift Container Platform implements changes by terminating an existing pod and recreating it with modified configuration, base image(s), or both. Pods are also treated as expendable, and do not maintain state when recreated. Therefore pods should usually be managed by higher-level controllers, rather than directly by users. Note For the maximum number of pods per OpenShift Container Platform node host, see the Cluster Limits. Warning Bare pods that are not managed by a replication controller will not be rescheduled upon node disruption. 2.1.2. Example pod configurations OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. The following is an example definition of a pod. It demonstrates many features of pods, most of which are discussed in other topics and thus only briefly mentioned here: Pod object definition (YAML) kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - "1000000" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: ["ALL"] resources: limits: memory: "100Mi" cpu: "1" requests: memory: "100Mi" cpu: "1" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi 1 Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key/value format in the metadata hash. 2 The pod restart policy with possible values Always , OnFailure , and Never . The default value is Always . 3 OpenShift Container Platform defines a security context for containers which specifies whether they are allowed to run as privileged containers, run as a user of their choice, and more. The default context is very restrictive but administrators can modify this as needed. 4 containers specifies an array of one or more container definitions. 5 The container specifies where external storage volumes are mounted within the container. 6 Specify the volumes to provide for the pod. Volumes mount at the specified path. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files.
It is safe to mount the host by using /host . 7 Each container in the pod is instantiated from its own container image. 8 The pod defines storage volumes that are available to its container(s) to use. If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . Note This pod definition does not include attributes that are filled by OpenShift Container Platform automatically after the pod is created and its lifecycle begins. The Kubernetes pod documentation has details about the functionality and purpose of pods. 2.1.3. Additional resources For more information on pods and storage, see Understanding persistent storage and Understanding ephemeral storage . 2.2. Viewing pods As an administrator, you can view the pods in your cluster and determine the health of those pods and the cluster as a whole. 2.2.1. About pods OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance (physical or virtual) to a container. You can view a list of pods associated with a specific project or view usage statistics about pods. 2.2.2. Viewing pods in a project You can view a list of pods associated with the current project, including the number of replicas, the current status, the number of restarts, and the age of the pod. Procedure To view the pods in a project: Change to the project: USD oc project <project-name> Run the following command: USD oc get pods For example: USD oc get pods Example output NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m Add the -o wide flag to view the pod IP address and the node where the pod is located. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none> 2.2.3. Viewing pod usage statistics You can display usage statistics about pods, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: Run the following command: USD oc adm top pods For example: USD oc adm top pods -n openshift-console Example output NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi Run the following command to view the usage statistics for pods with labels: USD oc adm top pod --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . For example: USD oc adm top pod --selector='name=my-pod' 2.2.4. Viewing resource logs You can view the log for various resources in the OpenShift CLI ( oc ) and web console. Logs read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI ( oc ).
Procedure (UI) In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . Procedure (CLI) View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. For example: USD oc logs ruby-58cd97df55-mww7r USD oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: USD oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: USD oc logs deployment/ruby The contents of log files are printed out. 2.3. Configuring an OpenShift Container Platform cluster for pods As an administrator, you can create and maintain an efficient cluster for pods. By keeping your cluster efficient, you can provide a better environment for your developers using such tools as what a pod does when it exits, ensuring that the required number of pods is always running, when to restart pods designed to run only once, limit the bandwidth available to pods, and how to keep pods running during disruptions. 2.3.1. Configuring how pods behave after restart A pod restart policy determines how OpenShift Container Platform responds when Containers in that pod exit. The policy applies to all Containers in that pod. The possible values are: Always - Tries restarting a successfully exited Container on the pod continuously, with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. The default is Always . OnFailure - Tries restarting a failed Container on the pod with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. Never - Does not try to restart exited or failed Containers on the pod. Pods immediately fail and exit. After the pod is bound to a node, the pod will never be bound to another node. This means that a controller is necessary in order for a pod to survive node failure: Condition Controller Type Restart Policy Pods that are expected to terminate (such as batch computations) Job OnFailure or Never Pods that are expected to not terminate (such as web servers) Replication controller Always . Pods that must run one-per-machine Daemon set Any If a Container on a pod fails and the restart policy is set to OnFailure , the pod stays on the node and the Container is restarted. If you do not want the Container to restart, use a restart policy of Never . If an entire pod fails, OpenShift Container Platform starts a new pod. Developers must address the possibility that applications might be restarted in a new pod. In particular, applications must handle temporary files, locks, incomplete output, and so forth caused by runs. Note Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents OpenShift Container Platform from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster using cloud provider integration. Install the cluster as if it was in a no-cloud environment. 
It is not recommended to toggle cloud provider integration on or off in an installed cluster. For details on how OpenShift Container Platform uses restart policy with failed Containers, see the Example States in the Kubernetes documentation. 2.3.2. Limiting the bandwidth available to pods You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data. The limits you place on a pod do not affect the bandwidth of other pods. Procedure To limit the bandwidth on a pod: Write an object definition JSON file, and specify the data traffic speed using kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations. For example, to limit both pod egress and ingress bandwidth to 10M/s: Limited Pod object definition { "kind": "Pod", "spec": { "containers": [ { "image": "openshift/hello-openshift", "name": "hello-openshift" } ] }, "apiVersion": "v1", "metadata": { "name": "iperf-slow", "annotations": { "kubernetes.io/ingress-bandwidth": "10M", "kubernetes.io/egress-bandwidth": "10M" } } } Create the pod using the object definition: USD oc create -f <file_or_dir_path> 2.3.3. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods that must always be available, even during a disruption. maxUnavailable is the number of pods that can be unavailable during a disruption. Note Available refers to the number of pods that have condition Ready=True . Ready=True refers to the pod that is able to serve requests and should be added to the load balancing pools of all matching services. A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. You can check for pod disruption budgets across all projects with the following: USD oc get poddisruptionbudget --all-namespaces Note The following example contains some values that are specific to OpenShift Container Platform on AWS.
Example output NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #... The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 2.3.3.1. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with an object definition similar to the following: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Or: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Run the following command to add the object to the project: USD oc create -f </path/to/file> -n <project_name> 2.3.3.2. Specifying the eviction policy for unhealthy pods When you use pod disruption budgets (PDBs) to specify how many pods must be available simultaneously, you can also define the criteria for how unhealthy pods are considered for eviction. You can choose one of the following policies: IfHealthyBudget Running pods that are not yet healthy can be evicted only if the guarded application is not disrupted. AlwaysAllow Running pods that are not yet healthy can be evicted regardless of whether the criteria in the pod disruption budget are met. This policy can help evict malfunctioning applications, such as ones with pods stuck in the CrashLoopBackOff state or failing to report the Ready status. Note It is recommended to set the unhealthyPodEvictionPolicy field to AlwaysAllow in the PodDisruptionBudget object to support the eviction of misbehaving applications during a node drain. The default behavior is to wait for the application pods to become healthy before the drain can proceed.
Procedure Create a YAML file that defines a PodDisruptionBudget object and specify the unhealthy pod eviction policy: Example pod-disruption-budget.yaml file apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1 1 Choose either IfHealthyBudget or AlwaysAllow as the unhealthy pod eviction policy. The default is IfHealthyBudget when the unhealthyPodEvictionPolicy field is empty. Create the PodDisruptionBudget object by running the following command: USD oc create -f pod-disruption-budget.yaml With a PDB that has the AlwaysAllow unhealthy pod eviction policy set, you can now drain nodes and evict the pods for a malfunctioning application guarded by this PDB. Additional resources Enabling features using feature gates Unhealthy Pod Eviction Policy in the Kubernetes documentation 2.3.4. Preventing pod removal using critical pods There are a number of core components that are critical to a fully functional cluster, but, run on a regular cluster node rather than the master. A cluster might stop working properly if a critical add-on is evicted. Pods marked as critical are not allowed to be evicted. Procedure To make a pod critical: Create a Pod spec or edit existing pods to include the system-cluster-critical priority class: apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1 # ... 1 Default priority class for pods that should never be evicted from a node. Alternatively, you can specify system-node-critical for pods that are important to the cluster but can be removed if necessary. Create the pod: USD oc create -f <file-name>.yaml 2.3.5. Reducing pod timeouts when using persistent volumes with high file counts If a storage volume contains many files (~1,000,000 or greater), you might experience pod timeouts. This can occur because, when volumes are mounted, OpenShift Container Platform recursively changes the ownership and permissions of the contents of each volume in order to match the fsGroup specified in a pod's securityContext . For large volumes, checking and changing the ownership and permissions can be time consuming, resulting in a very slow pod startup. You can reduce this delay by applying one of the following workarounds: Use a security context constraint (SCC) to skip the SELinux relabeling for a volume. Use the fsGroupChangePolicy field inside an SCC to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume. Use the Cluster Resource Override Operator to automatically apply an SCC to skip the SELinux relabeling. Use a runtime class to skip the SELinux relabeling for a volume. For information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 2.4. Automatically scaling pods with the horizontal pod autoscaler As a developer, you can use a horizontal pod autoscaler (HPA) to specify how OpenShift Container Platform should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration. You can create an HPA for any deployment, deployment config, replica set, replication controller, or stateful set. 
For information on scaling pods based on custom metrics, see Automatically scaling pods based on custom metrics . Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. For more information on these objects, see Understanding deployments . 2.4.1. Understanding horizontal pod autoscalers You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target. After you create a horizontal pod autoscaler, OpenShift Container Platform begins to query the CPU and/or memory resource metrics on the pods. When these metrics are available, the horizontal pod autoscaler computes the ratio of the current metric utilization with the desired metric utilization, and scales up or down accordingly. The query and scaling occurs at a regular interval, but can take one to two minutes before metrics become available. For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployment configurations, scaling corresponds directly to the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase. OpenShift Container Platform automatically accounts for resources and prevents unnecessary autoscaling during resource spikes, such as during start up. Pods in the unready state have 0 CPU usage when scaling up and the autoscaler ignores the pods when scaling down. Pods without known metrics have 0% CPU usage when scaling up and 100% CPU when scaling down. This allows for more stability during the HPA decision. To use this feature, you must configure readiness checks to determine if a new pod is ready for use. To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. 2.4.1.1. Supported metrics The following metrics are supported by horizontal pod autoscalers: Table 2.1. Metrics Metric Description API version CPU utilization Number of CPU cores used. Can be used to calculate a percentage of the pod's requested CPU. autoscaling/v1 , autoscaling/v2 Memory utilization Amount of memory used. Can be used to calculate a percentage of the pod's requested memory. autoscaling/v2 Important For memory-based autoscaling, memory usage must increase and decrease proportionally to the replica count. On average: An increase in replica count must lead to an overall decrease in memory (working set) usage per-pod. A decrease in replica count must lead to an overall increase in per-pod memory usage. Use the OpenShift Container Platform web console to check the memory behavior of your application and ensure that your application meets these requirements before using memory-based autoscaling. The following example shows autoscaling for the hello-node Deployment object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. 
If CPU usage on the pods reaches 75%, the pods increase to 7:

$ oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75

Example output

horizontalpodautoscaler.autoscaling/hello-node autoscaled

Sample YAML to create an HPA for the hello-node deployment object with minReplicas set to 3

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-node
  namespace: default
spec:
  maxReplicas: 7
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-node
  targetCPUUtilizationPercentage: 75
status:
  currentReplicas: 5
  desiredReplicas: 0

After you create the HPA, you can view the new state of the deployment by running the following command:

$ oc get deployment hello-node

There are now 5 pods in the deployment:

Example output

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   5/5     5            5           16m

2.4.2. How does the HPA work?

The horizontal pod autoscaler (HPA) extends the concept of pod auto-scaling. The HPA lets you create and manage a group of load-balanced pods. The HPA automatically increases or decreases the number of pods when a given CPU or memory threshold is crossed.

Figure 2.1. High level workflow of the HPA

The HPA is an API resource in the Kubernetes autoscaling API group. The autoscaler works as a control loop with a default of 15 seconds for the sync period. During this period, the controller manager queries the CPU utilization, memory utilization, or both, against what is defined in the YAML file for the HPA. The controller manager obtains the utilization metrics from the resource metrics API for per-pod resource metrics like CPU or memory, for each pod that is targeted by the HPA.

If a utilization value target is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each pod. The controller then takes the average of utilization across all targeted pods and produces a ratio that is used to scale the number of desired replicas. The HPA is configured to fetch metrics from metrics.k8s.io , which is provided by the metrics server. Because of the dynamic nature of metrics evaluation, the number of replicas can fluctuate during scaling for a group of replicas.

Note

To implement the HPA, all targeted pods must have a resource request set on their containers.

2.4.3. About requests and limits

The scheduler uses the resource request that you specify for containers in a pod to decide which node to place the pod on. The kubelet enforces the resource limit that you specify for a container to ensure that the container is not allowed to use more than the specified limit. The kubelet also reserves the request amount of that system resource specifically for that container to use.

How to use resource metrics?

In the pod specifications, you must specify the resource requests, such as CPU and memory. The HPA uses this specification to determine the resource utilization and then scales the target up or down.

For example, the HPA object uses the following metric source:

type: Resource
resource:
  name: cpu
  target:
    type: Utilization
    averageUtilization: 60

In this example, the HPA keeps the average utilization of the pods in the scaling target at 60%. Utilization is the ratio of the current resource usage to the requested resource of the pod.

2.4.4. Best practices

All pods must have resource requests configured

The HPA makes a scaling decision based on the observed CPU or memory utilization values of pods in an OpenShift Container Platform cluster.
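For the HPA to compute utilization, every container in the targeted pods needs a resource request. The following sketch shows where the requests are set on a Deployment; the hello-node name matches the earlier example, while the image and the request and limit values are illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: quay.io/example/hello-node:latest   # illustrative image reference
        resources:
          requests:
            cpu: 250m       # CPU utilization is calculated against this request
            memory: 128Mi   # memory utilization is calculated against this request
          limits:
            memory: 256Mi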
Utilization values are calculated as a percentage of the resource requests of each pod. Missing resource request values can affect the optimal performance of the HPA. Configure the cool down period During horizontal pod autoscaling, there might be a rapid scaling of events without a time gap. Configure the cool down period to prevent frequent replica fluctuations. You can specify a cool down period by configuring the stabilizationWindowSeconds field. The stabilization window is used to restrict the fluctuation of replicas count when the metrics used for scaling keep fluctuating. The autoscaling algorithm uses this window to infer a desired state and avoid unwanted changes to workload scale. For example, a stabilization window is specified for the scaleDown field: behavior: scaleDown: stabilizationWindowSeconds: 300 In the above example, all desired states for the past 5 minutes are considered. This approximates a rolling maximum, and avoids having the scaling algorithm frequently remove pods only to trigger recreating an equivalent pod just moments later. 2.4.4.1. Scaling policies The autoscaling/v2 API allows you to add scaling policies to a horizontal pod autoscaler. A scaling policy controls how the OpenShift Container Platform horizontal pod autoscaler (HPA) scales pods. Scaling policies allow you to restrict the rate that HPAs scale pods up or down by setting a specific number or specific percentage to scale in a specified period of time. You can also define a stabilization window , which uses previously computed desired states to control scaling if the metrics are fluctuating. You can create multiple policies for the same scaling direction, and determine which policy is used, based on the amount of change. You can also restrict the scaling by timed iterations. The HPA scales pods during an iteration, then performs scaling, as needed, in further iterations. Sample HPA object with a scaling policy apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0 ... 1 Specifies the direction for the scaling policy, either scaleDown or scaleUp . This example creates a policy for scaling down. 2 Defines the scaling policy. 3 Determines if the policy scales by a specific number of pods or a percentage of pods during each iteration. The default value is pods . 4 Limits the amount of scaling, either the number of pods or percentage of pods, during each iteration. There is no default value for scaling down by number of pods. 5 Determines the length of a scaling iteration. The default value is 15 seconds. 6 The default value for scaling down by percentage is 100%. 7 Determines which policy to use first, if multiple policies are defined. Specify Max to use the policy that allows the highest amount of change, Min to use the policy that allows the lowest amount of change, or Disabled to prevent the HPA from scaling in that policy direction. The default value is Max . 8 Determines the time period the HPA should look back at desired states. The default value is 0 . 9 This example creates a policy for scaling up. 10 Limits the amount of scaling up by the number of pods. 
The default value for scaling up the number of pods is 4%. 11 Limits the amount of scaling up by the percentage of pods. The default value for scaling up by percentage is 100%. Example policy for scaling down apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: ... minReplicas: 20 ... behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled In this example, when the number of pods is greater than 40, the percent-based policy is used for scaling down, as that policy results in a larger change, as required by the selectPolicy . If there are 80 pod replicas, in the first iteration the HPA reduces the pods by 8, which is 10% of the 80 pods (based on the type: Percent and value: 10 parameters), over one minute ( periodSeconds: 60 ). For the iteration, the number of pods is 72. The HPA calculates that 10% of the remaining pods is 7.2, which it rounds up to 8 and scales down 8 pods. On each subsequent iteration, the number of pods to be scaled is re-calculated based on the number of remaining pods. When the number of pods falls below 40, the pods-based policy is applied, because the pod-based number is greater than the percent-based number. The HPA reduces 4 pods at a time ( type: Pods and value: 4 ), over 30 seconds ( periodSeconds: 30 ), until there are 20 replicas remaining ( minReplicas ). The selectPolicy: Disabled parameter prevents the HPA from scaling up the pods. You can manually scale up by adjusting the number of replicas in the replica set or deployment set, if needed. If set, you can view the scaling policy by using the oc edit command: USD oc edit hpa hpa-resource-metrics-memory Example output apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior:\ '{"ScaleUp":{"StabilizationWindowSeconds":0,"SelectPolicy":"Max","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":15},{"Type":"Percent","Value":100,"PeriodSeconds":15}]},\ "ScaleDown":{"StabilizationWindowSeconds":300,"SelectPolicy":"Min","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":60},{"Type":"Percent","Value":10,"PeriodSeconds":60}]}}' ... 2.4.5. Creating a horizontal pod autoscaler by using the web console From the web console, you can create a horizontal pod autoscaler (HPA) that specifies the minimum and maximum number of pods you want to run on a Deployment or DeploymentConfig object. You can also define the amount of CPU or memory usage that your pods should target. Note An HPA cannot be added to deployments that are part of an Operator-backed service, Knative service, or Helm chart. Procedure To create an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Add HorizontalPodAutoscaler to open the Add HorizontalPodAutoscaler form. Figure 2.2. Add HorizontalPodAutoscaler From the Add HorizontalPodAutoscaler form, define the name, minimum and maximum pod limits, the CPU and memory usage, and click Save . Note If any of the values for CPU and memory usage are missing, a warning is displayed. To edit an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Edit HorizontalPodAutoscaler to open the Edit Horizontal Pod Autoscaler form. 
From the Edit Horizontal Pod Autoscaler form, edit the minimum and maximum pod limits and the CPU and memory usage, and click Save . Note While creating or editing the horizontal pod autoscaler in the web console, you can switch from Form view to YAML view . To remove an HPA in the web console: In the Topology view, click the node to reveal the side panel. From the Actions drop-down list, select Remove HorizontalPodAutoscaler . In the confirmation pop-up window, click Remove to remove the HPA. 2.4.6. Creating a horizontal pod autoscaler for CPU utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the CPU usage you specify. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. When autoscaling for CPU utilization, you can use the oc autoscale command and specify the minimum and maximum number of pods you want to run at any given time and the average CPU utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. To autoscale for a specific CPU value, create a HorizontalPodAutoscaler object with the target CPU and pod limits. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To create a horizontal pod autoscaler for CPU utilization: Perform one of the following: To scale based on the percent of CPU utilization, create a HorizontalPodAutoscaler object for an existing object: USD oc autoscale <object_type>/<name> \ 1 --min <number> \ 2 --max <number> \ 3 --cpu-percent=<percent> 4 1 Specify the type and name of the object to autoscale. The object must exist and be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 2 Optionally, specify the minimum number of replicas when scaling down. 3 Specify the maximum number of replicas when scaling up. 4 Specify the target average CPU utilization over all the pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used. For example, the following command shows autoscaling for the hello-node deployment object. The initial deployment requires 3 pods. 
The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods will increase to 7: USD oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75 To scale for a specific CPU value, create a YAML file similar to the following for an existing object: Create a YAML file similar to the following: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , Statefulset object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for memory utilization. 9 Specify cpu for CPU utilization. 10 Set to AverageValue . 11 Set to averageValue with the targeted CPU value. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml Verify that the horizontal pod autoscaler was created: USD oc get hpa cpu-autoscale Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m 2.4.7. Creating a horizontal pod autoscaler object for memory utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the average memory utilization you specify, either a direct value or a percentage of requested memory. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. For memory utilization, you can specify the minimum and maximum number of pods and the average memory utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . 
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler Example output Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none> Procedure To create a horizontal pod autoscaler for memory utilization: Create a YAML file for one of the following: To scale for a specific memory value, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , or Statefulset object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set the type to AverageValue . 11 Specify averageValue and a specific memory value. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. To scale for a percentage, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a ReplicationController, use v1 . For a DeploymentConfig, use apps.openshift.io/v1 . For a Deployment, ReplicaSet, Statefulset object, use apps/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 
8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set to Utilization . 11 Specify averageUtilization and a target average memory utilization over all the pods, represented as a percent of requested memory. The target pods must have memory requests configured. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml For example: USD oc create -f hpa.yaml Example output horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created Verify that the horizontal pod autoscaler was created: USD oc get hpa hpa-resource-metrics-memory Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m USD oc describe hpa hpa-resource-metrics-memory Example output Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target 2.4.8. Understanding horizontal pod autoscaler status conditions by using the CLI You can use the status conditions set to determine whether or not the horizontal pod autoscaler (HPA) is able to scale and whether or not it is currently restricted in any way. The HPA status conditions are available with the v2 version of the autoscaling API. The HPA responds with the following status conditions: The AbleToScale condition indicates whether HPA is able to fetch and update metrics, as well as whether any backoff-related conditions could prevent scaling. A True condition indicates scaling is allowed. A False condition indicates scaling is not allowed for the reason specified. The ScalingActive condition indicates whether the HPA is enabled (for example, the replica count of the target is not zero) and is able to calculate desired metrics. A True condition indicates metrics is working properly. A False condition generally indicates a problem with fetching metrics. The ScalingLimited condition indicates that the desired scale was capped by the maximum or minimum of the horizontal pod autoscaler. A True condition indicates that you need to raise or lower the minimum or maximum replica count in order to scale. A False condition indicates that the requested scaling is allowed. 
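You can read these conditions directly from the status of a HorizontalPodAutoscaler object. The following minimal sketch assumes the cm-test HPA that is used in the example output that follows; the printed values are illustrative.

$ oc get hpa cm-test -o jsonpath='{range .status.conditions[*]}{.type}={.status} {end}'

Example output (illustrative)

AbleToScale=True ScalingActive=True ScalingLimited=False

The oc describe command shows the same conditions together with their reasons and messages: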
USD oc describe hpa cm-test Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events: 1 The horizontal pod autoscaler status messages. The following is an example of a pod that is unable to scale: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind "ReplicationController" in group "apps" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind "ReplicationController" in group "apps" The following is an example of a pod that could not obtain the needed metrics for scaling: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API The following is an example of a pod where the requested autoscaling was less than the required minimums: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.8.1. Viewing horizontal pod autoscaler status conditions by using the CLI You can view the status conditions set on a pod by the horizontal pod autoscaler (HPA). Note The horizontal pod autoscaler status conditions are available with the v2 version of the autoscaling API. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . 
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To view the status conditions on a pod, use the following command with the name of the pod: USD oc describe hpa <pod-name> For example: USD oc describe hpa cm-test The conditions appear in the Conditions field in the output. Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.9. Additional resources For more information on replication controllers and deployment controllers, see Understanding deployments and deployment configs . For an example on the usage of HPA, see Horizontal Pod Autoscaling of Quarkus Application Based on Memory Utilization . 2.5. Automatically adjust pod resource levels with the vertical pod autoscaler The OpenShift Container Platform Vertical Pod Autoscaler Operator (VPA) automatically reviews the historic and current CPU and memory resources for containers in pods and can update the resource limits and requests based on the usage values it learns. The VPA uses individual custom resources (CR) to update all of the pods in a project that are associated with any built-in workload objects, including the following object types: Deployment DeploymentConfig StatefulSet Job DaemonSet ReplicaSet ReplicationController The VPA can also update certain custom resource object that manage pods, as described in Using the Vertical Pod Autoscaler Operator with Custom Resources . The VPA helps you to understand the optimal CPU and memory usage for your pods and can automatically maintain pod resources through the pod lifecycle. 2.5.1. About the Vertical Pod Autoscaler Operator The Vertical Pod Autoscaler Operator (VPA) is implemented as an API resource and a custom resource (CR). The CR determines the actions that the VPA Operator should take with the pods associated with a specific workload object, such as a daemon set, replication controller, and so forth, in a project. The VPA Operator consists of three components, each of which has its own pod in the VPA namespace: Recommender The VPA recommender monitors the current and past resource consumption and, based on this data, determines the optimal CPU and memory resources for the pods in the associated workload object. Updater The VPA updater checks if the pods in the associated workload object have the correct resources. 
If the resources are correct, the updater takes no action. If the resources are not correct, the updater kills the pod so that they can be recreated by their controllers with the updated requests. Admission controller The VPA admission controller sets the correct resource requests on each new pod in the associated workload object, whether the pod is new or was recreated by its controller due to the VPA updater actions. You can use the default recommender or use your own alternative recommender to autoscale based on your own algorithms. The default recommender automatically computes historic and current CPU and memory usage for the containers in those pods and uses this data to determine optimized resource limits and requests to ensure that these pods are operating efficiently at all times. For example, the default recommender suggests reduced resources for pods that are requesting more resources than they are using and increased resources for pods that are not requesting enough. The VPA then automatically deletes any pods that are out of alignment with these recommendations one at a time, so that your applications can continue to serve requests with no downtime. The workload objects then re-deploy the pods with the original resource limits and requests. The VPA uses a mutating admission webhook to update the pods with optimized resource limits and requests before the pods are admitted to a node. If you do not want the VPA to delete pods, you can view the VPA resource limits and requests and manually update the pods as needed. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . For example, if you have a pod that uses 50% of the CPU but only requests 10%, the VPA determines that the pod is consuming more CPU than requested and deletes the pod. The workload object, such as replica set, restarts the pods and the VPA updates the new pod with its recommended resources. For developers, you can use the VPA to help ensure your pods stay up during periods of high demand by scheduling pods onto nodes that have appropriate resources for each pod. Administrators can use the VPA to better utilize cluster resources, such as preventing pods from reserving more CPU resources than needed. The VPA monitors the resources that workloads are actually using and adjusts the resource requirements so capacity is available to other workloads. The VPA also maintains the ratios between limits and requests that are specified in initial container configuration. Note If you stop running the VPA or delete a specific VPA CR in your cluster, the resource requests for the pods already modified by the VPA do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the VPA. 2.5.2. Installing the Vertical Pod Autoscaler Operator You can use the OpenShift Container Platform web console to install the Vertical Pod Autoscaler Operator (VPA). Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose VerticalPodAutoscaler from the list of available Operators, and click Install . 
On the Install Operator page, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-vertical-pod-autoscaler namespace, which is automatically created if it does not exist.

Click Install .

Verification

Verify the installation by listing the VPA Operator components:

Navigate to Workloads Pods .
Select the openshift-vertical-pod-autoscaler project from the drop-down menu and verify that there are four pods running.
Navigate to Workloads Deployments to verify that there are four deployments running.

Optional: Verify the installation in the OpenShift Container Platform CLI using the following command:

$ oc get all -n openshift-vertical-pod-autoscaler

The output shows four pods and four deployments:

Example output

NAME                                                     READY   STATUS    RESTARTS   AGE
pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc    1/1     Running   0          3m13s
pod/vpa-admission-plugin-default-67644fc87f-xq7k9        1/1     Running   0          2m56s
pod/vpa-recommender-default-7c54764b59-8gckt             1/1     Running   0          2m56s
pod/vpa-updater-default-7f6cc87858-47vw9                 1/1     Running   0          2m56s

NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/vpa-webhook   ClusterIP   172.30.53.206   <none>        443/TCP   2m56s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/vertical-pod-autoscaler-operator   1/1     1            1           3m13s
deployment.apps/vpa-admission-plugin-default       1/1     1            1           2m56s
deployment.apps/vpa-recommender-default            1/1     1            1           2m56s
deployment.apps/vpa-updater-default                1/1     1            1           2m56s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47   1         1         1       3m13s
replicaset.apps/vpa-admission-plugin-default-67644fc87f       1         1         1       2m56s
replicaset.apps/vpa-recommender-default-7c54764b59            1         1         1       2m56s
replicaset.apps/vpa-updater-default-7f6cc87858                1         1         1       2m56s

2.5.3. Moving the Vertical Pod Autoscaler Operator components

The Vertical Pod Autoscaler Operator (VPA) and each of its components has its own pod in the VPA namespace on the control plane nodes. You can move the VPA Operator and component pods to infrastructure or worker nodes by adding a node selector to the VPA subscription and the VerticalPodAutoscalerController CR.

You can create and use infrastructure nodes to host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment. For more information, see Creating infrastructure machine sets .

You can move the components to the same node or separate nodes as appropriate for your organization.

The following example shows the default deployment of the VPA pods to the control plane nodes.
Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none> Procedure Move the VPA Operator pod by adding a node selector to the Subscription custom resource (CR) for the VPA Operator: Edit the CR: USD oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler Add a node selector to match the node role label on the node where you want to install the VPA Operator pod: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/<node_role>: "" 1 1 1 Specifies the node role of the node where you want to move the VPA Operator pod. Note If the infra node uses taints, you need to add a toleration to the Subscription CR. For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 1 - key: "node-role.kubernetes.io/infra" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for a taint on the node where you want to move the VPA Operator pod. Move each VPA component by adding node selectors to the VerticalPodAutoscaler custom resource (CR): Edit the CR: USD oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler Add node selectors to match the node role label on the node where you want to install the VPA components: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ... spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: "" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: "" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: "" 3 1 Optional: Specifies the node role for the VPA admission pod. 2 Optional: Specifies the node role for the VPA recommender pod. 3 Optional: Specifies the node role for the VPA updater pod. Note If a target node uses taints, you need to add a toleration to the VerticalPodAutoscalerController CR. For example: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ... 
spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 1 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 2 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 3 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for the admission controller pod for a taint on the node where you want to install the pod. 2 Specifies a toleration for the recommender pod for a taint on the node where you want to install the pod. 3 Specifies a toleration for the updater pod for a taint on the node where you want to install the pod. Verification You can verify the pods have moved by using the following command: USD oc get pods -n openshift-vertical-pod-autoscaler -o wide The pods are no longer deployed to the control plane nodes. Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> Additional resources Creating infrastructure machine sets 2.5.4. About Using the Vertical Pod Autoscaler Operator To use the Vertical Pod Autoscaler Operator (VPA), you create a VPA custom resource (CR) for a workload object in your cluster. The VPA learns and applies the optimal CPU and memory resources for the pods associated with that workload object. You can use a VPA with a deployment, stateful set, job, daemon set, replica set, or replication controller workload object. The VPA CR must be in the same project as the pods you want to monitor. You use the VPA CR to associate a workload object and specify which mode the VPA operates in: The Auto and Recreate modes automatically apply the VPA CPU and memory recommendations throughout the pod lifetime. The VPA deletes any pods in the project that are out of alignment with its recommendations. When redeployed by the workload object, the VPA updates the new pods with its recommendations. The Initial mode automatically applies VPA recommendations only at pod creation. The Off mode only provides recommended resource limits and requests, allowing you to manually apply the recommendations. The off mode does not update pods. You can also use the CR to opt-out certain containers from VPA evaluation and updates. For example, a pod has the following limits and requests: resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi After creating a VPA that is set to auto , the VPA learns the resource usage and deletes the pod. When redeployed, the pod uses the new resource limits and requests: resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k You can view the VPA recommendations using the following command: USD oc get vpa <vpa-name> --output yaml After a few minutes, the output shows the recommendations for CPU and memory requests, similar to the following: Example output ... status: ... 
recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ... The output shows the recommended resources, target , the minimum recommended resources, lowerBound , the highest recommended resources, upperBound , and the most recent resource recommendations, uncappedTarget . The VPA uses the lowerBound and upperBound values to determine if a pod needs to be updated. If a pod has resource requests below the lowerBound values or above the upperBound values, the VPA terminates and recreates the pod with the target values. 2.5.4.1. Changing the VPA minimum value By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete and update their pods. As a result, workload objects that specify fewer than two replicas are not automatically acted upon by the VPA. The VPA does update new pods from these workload objects if the pods are restarted by some process external to the VPA. You can change this cluster-wide minimum value by modifying the minReplicas parameter in the VerticalPodAutoscalerController custom resource (CR). For example, if you set minReplicas to 3 , the VPA does not delete and update pods for workload objects that specify fewer than three replicas. Note If you set minReplicas to 1 , the VPA can delete the only pod for a workload object that specifies only one replica. You should use this setting with one-replica objects only if your workload can tolerate downtime whenever the VPA deletes a pod to adjust its resources. To avoid unwanted downtime with one-replica objects, configure the VPA CRs with the podUpdatePolicy set to Initial , which automatically updates the pod only when it is restarted by some process external to the VPA, or Off , which allows you to update the pod manually at an appropriate time for your application. Example VerticalPodAutoscalerController object apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: "2021-04-21T19:29:49Z" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: "142172" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15 1 Specify the minimum number of replicas in a workload object for the VPA to act on. Any objects with replicas fewer than the minimum are not automatically deleted by the VPA. 2.5.4.2. Automatically applying VPA recommendations To use the VPA to automatically update pods, create a VPA CR for a specific workload object with updateMode set to Auto or Recreate . When the pods are created for the workload object, the VPA constantly monitors the containers to analyze their CPU and memory needs. The VPA deletes any pods that do not meet the VPA recommendations for CPU and memory. When redeployed, the pods use the new resource limits and requests based on the VPA recommendations, honoring any pod disruption budget set for your applications. The recommendations are added to the status field of the VPA CR for reference. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. 
Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . Example VPA CR for the Auto mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto or Recreate : Auto . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. Recreate . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. This mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. Note Before a VPA can determine recommendations for resources and apply the recommended resources to new pods, operating pods must exist and be running in the project. If a workload's resource usage, such as CPU and memory, is consistent, the VPA can determine recommendations for resources in a few minutes. If a workload's resource usage is inconsistent, the VPA must collect metrics at various resource usage intervals for the VPA to make an accurate recommendation. 2.5.4.3. Automatically applying VPA recommendations on pod creation To use the VPA to apply the recommended resources only when a pod is first deployed, create a VPA CR for a specific workload object with updateMode set to Initial . Then, manually delete any pods associated with the workload object that you want to use the VPA recommendations. In the Initial mode, the VPA does not delete pods and does not update the pods as it learns new resource recommendations. Example VPA CR for the Initial mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Initial" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Initial . The VPA assigns resources when pods are created and does not change the resources during the lifetime of the pod. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.4.4. Manually applying VPA recommendations To use the VPA to only determine the recommended CPU and memory values, create a VPA CR for a specific workload object with updateMode set to off . When the pods are created for that workload object, the VPA analyzes the CPU and memory needs of the containers and records those recommendations in the status field of the VPA CR. The VPA does not update the pods as it determines new resource recommendations. 
Example VPA CR for the Off mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Off" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Off . You can view the recommendations using the following command. USD oc get vpa <vpa-name> --output yaml With the recommendations, you can edit the workload object to add CPU and memory requests, then delete and redeploy the pods using the recommended resources. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.4.5. Exempting containers from applying VPA recommendations If your workload object has multiple containers and you do not want the VPA to evaluate and act on all of the containers, create a VPA CR for a specific workload object and add a resourcePolicy to opt-out specific containers. When the VPA updates the pods with recommended resources, any containers with a resourcePolicy are not updated and the VPA does not present recommendations for those containers in the pod. apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto , Recreate , or Off . The Recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. 4 Specify the containers you want to opt-out and set mode to Off . For example, a pod has two containers, the same resource requests and limits: # ... spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi # ... After launching a VPA CR with the backend container set to opt-out, the VPA terminates and recreates the pod with the recommended resources applied only to the frontend container: ... spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k ... name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi ... 2.5.4.6. Performance tuning the VPA Operator As a cluster administrator, you can tune the performance of your Vertical Pod Autoscaler Operator (VPA) to limit the rate at which the VPA makes requests of the Kubernetes API server and to specify the CPU and memory resources for the VPA recommender, updater, and admission controller component pods. Additionally, you can configure the VPA Operator to monitor only those workloads that are being managed by a VPA custom resource (CR). By default, the VPA Operator monitors every workload in the cluster. This allows the VPA Operator to accrue and store 8 days of historical data for all workloads, which the Operator can use if a new VPA CR is created for a workload. 
However, this causes the VPA Operator to use significant CPU and memory, which could cause the Operator to fail, particularly on larger clusters. By configuring the VPA Operator to monitor only workloads with a VPA CR, you can save on CPU and memory resources. One trade-off is that if you have a workload that has been running, and you create a VPA CR to manage that workload, the VPA Operator does not have any historical data for that workload. As a result, the initial recommendations are not as useful as those after the workload had been running for some time. These tunings allow you to ensure the VPA has sufficient resources to operate at peak efficiency and to prevent throttling and a possible delay in pod admissions. You can perform the following tunings on the VPA components by editing the VerticalPodAutoscalerController custom resource (CR): To prevent throttling and pod admission delays, set the queries-per-second (QPS) and burst rates for VPA requests of the Kubernetes API server by using the kube-api-qps and kube-api-burst parameters. To ensure sufficient CPU and memory, set the CPU and memory requests for VPA component pods by using the standard cpu and memory resource requests. To configure the VPA Operator to monitor only workloads that are being managed by a VPA CR, set the memory-saver parameter to true for the recommender component. For guidelines on the resources and rate limits that you could set for each VPA component, the following tables provide recommended baseline values, depending on the size of your cluster and other factors. Important These recommended values were derived from internal Red Hat testing on clusters that are not necessarily representative of real-world clusters. You should test these values in a non-production cluster before configuring a production cluster. Table 2.2. Requests by containers in the cluster Component 1-500 containers 500-1000 containers 1000-2000 containers 2000-4000 containers 4000+ containers CPU Memory CPU Memory CPU Memory CPU Memory CPU Memory Admission 25m 50Mi 25m 75Mi 40m 150Mi 75m 260Mi (0.03c)/2 + 10 [1] (0.1c)/2 + 50 [1] Recommender 25m 100Mi 50m 160Mi 75m 275Mi 120m 420Mi (0.05c)/2 + 50 [1] (0.15c)/2 + 120 [1] Updater 25m 100Mi 50m 220Mi 80m 350Mi 150m 500Mi (0.07c)/2 + 20 [1] (0.15c)/2 + 200 [1] c is the number of containers in the cluster. Note It is recommended that you set the memory limit on your containers to at least double the recommended requests in the table. However, because CPU is a compressible resource, setting CPU limits for containers can throttle the VPA. As such, it is recommended that you do not set a CPU limit on your containers. Table 2.3. Rate limits by VPAs in the cluster Component 1 - 150 VPAs 151 - 500 VPAs 501-2000 VPAs 2001-4000 VPAs QPS Limit [1] Burst [2] QPS Limit Burst QPS Limit Burst QPS Limit Burst Recommender 5 10 30 60 60 120 120 240 Updater 5 10 30 60 60 120 120 240 QPS specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default for the updater and recommender pods is 5.0 . Burst specifies the burst limit when making requests to Kubernetes API server. The default for the updater and recommender pods is 10.0 . Note If you have more than 4000 VPAs in your cluster, it is recommended that you start performance tuning with the values in the table and slowly increase the values until you achieve the desired recommender and updater latency and performance. 
You should adjust these values slowly because increased QPS and Burst could affect the cluster health and slow down the Kubernetes API server if too many API requests are being sent to the API server from the VPA components. The following example VPA controller CR is for a cluster with 1000 to 2000 containers and a pod creation surge of 26 to 50. The CR sets the following values: The container memory and CPU requests for all three VPA components The container memory limit for all three VPA components The QPS and burst rates for all three VPA components The memory-saver parameter to true for the VPA recommender component Example VerticalPodAutoscalerController CR apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15 1 Specifies the tuning parameters for the VPA admission controller. 2 Specifies the API QPS and burst rates for the VPA admission controller. kube-api-qps : Specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default is 5.0 . kube-api-burst : Specifies the burst limit when making requests to Kubernetes API server. The default is 10.0 . 3 Specifies the resource requests and limits for the VPA admission controller pod. 4 Specifies the tuning parameters for the VPA recommender. 5 Specifies that the VPA Operator monitors only workloads with a VPA CR. The default is false . 6 Specifies the tuning parameters for the VPA updater. You can verify that the settings were applied to each VPA component pod. Example updater pod apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 # ... resources: requests: cpu: 80m memory: 350M # ... Example admission controller pod apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 # ... resources: requests: cpu: 40m memory: 150Mi # ... Example recommender pod apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true # ... resources: requests: cpu: 75m memory: 275Mi # ... 2.5.4.7. 
Using an alternative recommender You can use your own recommender to autoscale based on your own algorithms. If you do not specify an alternative recommender, OpenShift Container Platform uses the default recommender, which suggests CPU and memory requests based on historical usage. Because there is no universal recommendation policy that applies to all types of workloads, you might want to create and deploy different recommenders for specific workloads. For example, the default recommender might not accurately predict future resource usage when containers exhibit certain resource behaviors, such as cyclical patterns that alternate between usage spikes and idling as used by monitoring applications, or recurring and repeating patterns used with deep learning applications. Using the default recommender with these usage behaviors might result in significant over-provisioning and Out of Memory (OOM) kills for your applications. Note Instructions for how to create a recommender are beyond the scope of this documentation, Procedure To use an alternative recommender for your pods: Create a service account for the alternative recommender and bind that service account to the required cluster role: apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> 1 Creates a service account for the recommender in the namespace where the recommender is deployed. 2 Binds the recommender service account to the metrics-reader role. Specify the namespace where the recommender is to be deployed. 3 Binds the recommender service account to the vpa-actor role. Specify the namespace where the recommender is to be deployed. 4 Binds the recommender service account to the vpa-target-reader role. Specify the namespace where the recommender is to be deployed. To add the alternative recommender to the cluster, create a Deployment object similar to the following: apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true 1 Creates a container for your alternative recommender. 2 Specifies your recommender image. 
3 Associates the service account that you created for the recommender. A new pod is created for the alternative recommender in the same namespace. USD oc get pods Example output NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s Configure a VPA CR that includes the name of the alternative recommender Deployment object. Example VPA CR to include the alternative recommender apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: "apps/v1" kind: Deployment 2 name: frontend 1 Specifies the name of the alternative recommender deployment. 2 Specifies the name of an existing workload object you want this VPA to manage. 2.5.5. Using the Vertical Pod Autoscaler Operator You can use the Vertical Pod Autoscaler Operator (VPA) by creating a VPA custom resource (CR). The CR indicates which pods it should analyze and determines the actions the VPA should take with those pods. You can use the VPA to scale built-in resources such as deployments or stateful sets, and custom resources that manage pods. For more information on using the VPA with custom resources, see "Using the Vertical Pod Autoscaler Operator with Custom Resources." Prerequisites The workload object that you want to autoscale must exist. If you want to use an alternative recommender, a deployment including that recommender must exist. Procedure To create a VPA CR for a specific workload object: Change to the project where the workload object you want to scale is located. Create a VPA CR YAML file: apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" recommenders: 5 - name: my-recommender 1 Specify the type of workload object you want this VPA to manage: Deployment , StatefulSet , Job , DaemonSet , ReplicaSet , or ReplicationController . 2 Specify the name of an existing workload object you want this VPA to manage. 3 Specify the VPA mode: auto to automatically apply the recommended resources on pods associated with the controller. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. recreate to automatically apply the recommended resources on pods associated with the workload object. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. The recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. initial to automatically apply the recommended resources when pods associated with the workload object are created. The VPA does not update the pods as it learns new resource recommendations. off to only generate resource recommendations for the pods associated with the workload object. The VPA does not update the pods as it learns new resource recommendations and does not apply the recommendations to new pods. 4 Optional. Specify the containers you want to opt-out and set the mode to Off . 5 Optional. Specify an alternative recommender. 
Create the VPA CR: USD oc create -f <file-name>.yaml After a few moments, the VPA learns the resource usage of the containers in the pods associated with the workload object. You can view the VPA recommendations using the following command: USD oc get vpa <vpa-name> --output yaml The output shows the recommendations for CPU and memory requests, similar to the following: Example output ... status: ... recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ... 1 lowerBound is the minimum recommended resource levels. 2 target is the recommended resource levels. 3 uncappedTarget is the most recent resource recommendations. 4 upperBound is the highest recommended resource levels. 2.5.5.1. Example custom resources for the Vertical Pod Autoscaler The Vertical Pod Autoscaler Operator (VPA) can update not only built-in resources such as deployments or stateful sets, but also custom resources that manage pods. In order to use the VPA with a custom resource, when you create the CustomResourceDefinition (CRD) object, you must configure the labelSelectorPath field in the /scale subresource. The /scale subresource creates a Scale object. The labelSelectorPath field defines the JSON path inside the custom resource that corresponds to Status.Selector in the Scale object and in the custom resource. The following is an example of a CustomResourceDefinition and a CustomResource that fulfill these requirements, along with a VerticalPodAutoscaler definition that targets the custom resource. The following example shows the /scale subresource contract. Note This example does not result in the VPA scaling pods because there is no controller for the custom resource that allows it to own any pods. As such, you must write a controller in a language supported by Kubernetes to manage the reconciliation and state management between the custom resource and your pods. The example illustrates the configuration for the VPA to understand the custom resource as scalable. Example custom CRD, CR apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod 1 Specifies the JSON path that corresponds to the status.selector field of the custom resource object. Example custom CR apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: "app=scalable-cr" 1 replicas: 1 1 Specify the label type to apply to managed pods. This is the field referenced by the labelSelectorPath in the custom resource definition object.
Example VPA object apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: "Auto" 2.5.6. Uninstalling the Vertical Pod Autoscaler Operator You can remove the Vertical Pod Autoscaler Operator (VPA) from your OpenShift Container Platform cluster. After uninstalling, the resource requests for the pods already modified by an existing VPA CR do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the Vertical Pod Autoscaler Operator. Note You can remove a specific VPA CR by using the oc delete vpa <vpa-name> command. The same actions apply for resource requests as uninstalling the vertical pod autoscaler. After removing the VPA Operator, it is recommended that you remove the other components associated with the Operator to avoid potential issues. Prerequisites The Vertical Pod Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-vertical-pod-autoscaler project. For the VerticalPodAutoscaler Operator, click the Options menu and select Uninstall Operator . Optional: To remove all operands associated with the Operator, in the dialog box, select Delete all operand instances for this operator checkbox. Click Uninstall . Optional: Use the OpenShift CLI to remove the VPA components: Delete the VPA namespace: USD oc delete namespace openshift-vertical-pod-autoscaler Delete the VPA custom resource definition (CRD) objects: USD oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io USD oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io USD oc delete crd verticalpodautoscalers.autoscaling.k8s.io Deleting the CRDs removes the associated roles, cluster roles, and role bindings. Note This action removes from the cluster all user-created VPA CRs. If you re-install the VPA, you must create these objects again. Delete the MutatingWebhookConfiguration object by running the following command: USD oc delete MutatingWebhookConfiguration vpa-webhook-config Delete the VPA Operator: USD oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler 2.6. Providing sensitive data to pods by using secrets Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an administrator, you can use Secret objects to provide this information without exposing that information in clear text. 2.6.1. Understanding secrets The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. 
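The values stored under the data field of a Secret must be base64 encoded before they are added to the object. As a quick illustration, assuming a placeholder user name of myuser, you can generate an encoded value with standard shell tools:

USD echo -n 'myuser' | base64

Example output

bXl1c2Vy

The -n option prevents a trailing newline from being included in the encoded value. Values placed under the stringData field can remain in plain text, as shown in the following definition.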
YAML Secret object definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary . 3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod's service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume). 2.6.1.1. Types of secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/basic-auth : Use with Basic authentication kubernetes.io/dockercfg : Use as an image pull secret kubernetes.io/dockerconfigjson : Use as an image pull secret kubernetes.io/service-account-token : Use to obtain a legacy service account API token kubernetes.io/ssh-auth : Use with SSH key authentication kubernetes.io/tls : Use with TLS certificate authorities Specify type: Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret, allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. For examples of creating different types of secrets, see Understanding how to create secrets . 2.6.1.2. Secret data keys Secret keys must be in a DNS subdomain. 2.6.1.3. Automatically generated image pull secrets By default, OpenShift Container Platform creates an image pull secret for each service account. Note Prior to OpenShift Container Platform 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Container Platform 4.16, this service account API token secret is no longer created. After upgrading to 4.17, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. 
However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 2.6.2. Understanding how to create secrets As an administrator, you must create a secret before developers can create the pods that depend on that secret. When creating secrets: Create a secret object that contains the data you want to keep secret. The specific data required for each secret type is described in the following sections. Example YAML object that creates an opaque secret apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB 1 Specifies the type of secret. 2 Specifies encoded string and data. 3 Specifies decoded string and data. Use either the data or stringData fields, not both. Update the pod's service account to reference the secret: YAML of a service account that uses a secret apiVersion: v1 kind: ServiceAccount ... secrets: - name: test-secret Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume): YAML of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never 1 Add a volumeMounts field to each container that needs the secret. 2 Specifies an unused directory name where you would like the secret to appear. Each key in the secret data map becomes the filename under mountPath . 3 Set to true . If true, this instructs the driver to provide a read-only volume. 4 Specifies the name of the secret. YAML of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Specifies the environment variable that consumes the secret key. YAML of a build config populating environment variables with secret data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest' 1 Specifies the environment variable that consumes the secret key. 2.6.2.1. Secret creation restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers.
As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. Image pull secrets use service accounts for the automatic injection of the secret into all pods in a namespace. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to a Secret object. Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that could exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. 2.6.2.2. Creating an opaque secret As an administrator, you can create an opaque secret, which allows you to store unstructured key:value pairs that can contain arbitrary values. Procedure Create a Secret object in a YAML file. For example: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. Use the following command to create a Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.3. Creating a legacy service account token secret As an administrator, you can create a legacy service account token secret, which allows you to distribute a service account token to applications that must authenticate to the API. Warning It is recommended to obtain bound service account tokens using the TokenRequest API instead of using legacy service account token secrets. You should create a service account token secret only if you cannot use the TokenRequest API and if the security exposure of a nonexpiring token in a readable API object is acceptable to you. Bound service account tokens are more secure than service account token secrets for the following reasons: Bound service account tokens have a bounded lifetime. Bound service account tokens contain audiences. Bound service account tokens can be bound to pods or secrets and the bound tokens are invalidated when the bound object is removed. Workloads are automatically injected with a projected volume to obtain a bound service account token. If your workload needs an additional service account token, add an additional projected volume in your workload manifest. For more information, see "Configuring bound service account tokens using volume projection". Procedure Create a Secret object in a YAML file: Example Secret object apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: "sa-name" 1 type: kubernetes.io/service-account-token 2 1 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 
2 Specifies a service account token secret. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets Configuring bound service account tokens using volume projection Understanding and creating service accounts 2.6.2.4. Creating a basic authentication secret As an administrator, you can create a basic authentication secret, which allows you to store the credentials needed for basic authentication. When using this secret type, the data parameter of the Secret object must contain the following keys encoded in the base64 format: username : the user name for authentication password : the password or token for authentication Note You can use the stringData parameter to use clear text content. Procedure Create a Secret object in a YAML file: Example secret object apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password> 1 Specifies a basic authentication secret. 2 Specifies the basic authentication values to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.5. Creating an SSH authentication secret As an administrator, you can create an SSH authentication secret, which allows you to store data used for SSH authentication. When using this secret type, the data parameter of the Secret object must contain the SSH credential to use. Procedure Create a Secret object in a YAML file on a control plane node: Example secret object apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y ... 1 Specifies an SSH authentication secret. 2 Specifies the SSH key/value pair as the SSH credentials to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.6. Creating a Docker configuration secret As an administrator, you can create a Docker configuration secret, which allows you to store the credentials for accessing a container image registry. kubernetes.io/dockercfg . Use this secret type to store your local Docker configuration file. The data parameter of the secret object must contain the contents of a .dockercfg file encoded in the base64 format. kubernetes.io/dockerconfigjson . Use this secret type to store your local Docker configuration JSON file. 
The data parameter of the secret object must contain the contents of a .docker/config.json file encoded in the base64 format. Procedure Create a Secret object in a YAML file. Example Docker configuration secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockercfg 1 data: .dockercfg: bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration file. 2 The output of a base64-encoded Docker configuration file. Example Docker configuration JSON secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson: bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration JSON file. 2 The output of a base64-encoded Docker configuration JSON file. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.7. Creating a secret using the web console You can create secrets using the web console. Procedure Navigate to Workloads Secrets . Click Create From YAML . Edit the YAML manually to your specifications, or drag and drop a file into the YAML editor. For example: apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com 1 This example specifies an opaque secret; however, you may see other secret types such as service account token secret, basic authentication secret, SSH authentication secret, or a secret that uses Docker configuration. 2 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. Click Create . Click Add Secret to workload . From the drop-down menu, select the workload to add. Click Save . 2.6.3. Understanding how to update secrets When you modify the value of a secret, the value (used by an already running pod) will not dynamically change. To change a secret, you must delete the original pod and create a new pod (perhaps with an identical PodSpec). Updating a secret follows the same workflow as deploying a new Container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods will report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 2.6.4. Creating and using secrets As an administrator, you can create a service account token secret.
This allows you to distribute a service account token to applications that must authenticate to the API. Procedure Create a service account in your namespace by running the following command: USD oc create sa <service_account_name> -n <your_namespace> Save the following YAML example to a file named service-account-token-secret.yaml . The example includes a Secret object configuration that you can use to generate a service account token: apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: "sa-name" 2 type: kubernetes.io/service-account-token 3 1 Replace <secret_name> with the name of your service token secret. 2 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 3 Specifies a service account token secret type. Generate the service account token by applying the file: USD oc apply -f service-account-token-secret.yaml Get the service account token from the secret by running the following command: USD oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1 Example output ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA 1 Replace <sa_token_secret> with the name of your service token secret. Use your service account token to authenticate with the API of your cluster: USD curl -X GET <openshift_cluster_api> --header "Authorization: Bearer <token>" 1 2 1 Replace <openshift_cluster_api> with the OpenShift cluster API. 2 Replace <token> with the service account token that is output in the preceding command. 2.6.5. About using signed certificates with secrets To secure communication to your service, you can configure OpenShift Container Platform to generate a signed serving certificate/key pair that you can add into a secret in a project. A service serving certificate secret is intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Service Pod spec configured for a service serving certificates secret. apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1 # ... 1 Specify the name for the certificate Other pods can trust cluster-created certificates (which are only signed for internal DNS names), by using the CA bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. 
A new certificate is created. 2.6.5.1. Generating signed certificates for use with secrets To use a signed serving certificate/key pair with a pod, create or edit the service to add the service.beta.openshift.io/serving-cert-secret-name annotation, then add the secret to the pod. Procedure To create a service serving certificate secret : Edit the Pod spec for your service. Add the service.beta.openshift.io/serving-cert-secret-name annotation with the name you want to use for your secret. kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376 The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. Create the service: USD oc create -f <file-name>.yaml View the secret to make sure it was created: View a list of all secrets: USD oc get secrets Example output NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m View details on your secret: USD oc describe secret my-cert Example output Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes Edit your Pod spec with that secret. apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: "/etc/my-path" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511 When it is available, your pod will run. The certificate will be good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. 2.6.6. Troubleshooting secrets If a service certificate generation fails with (service's service.beta.openshift.io/serving-cert-generation-error annotation contains): secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificates regeneration by removing the old secret, and clearing the following annotations on the service service.beta.openshift.io/serving-cert-generation-error , service.beta.openshift.io/serving-cert-generation-error-num : Delete the secret: USD oc delete secret <secret_name> Clear the annotations: USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command removing annotation has a - after the annotation name to be removed. 2.7. 
Providing sensitive data to pods by using an external secrets store Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an alternative to using Kubernetes Secret objects to provide sensitive information, you can use an external secrets store to store the sensitive information. You can use the Secrets Store CSI Driver Operator to integrate with an external secrets store and mount the secret content as a pod volume. Important The Secrets Store CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.7.1. About the Secrets Store CSI Driver Operator Kubernetes secrets are stored with Base64 encoding. etcd provides encryption at rest for these secrets, but when secrets are retrieved, they are decrypted and presented to the user. If role-based access control is not configured properly on your cluster, anyone with API or etcd access can retrieve or modify a secret. Additionally, anyone who is authorized to create a pod in a namespace can use that access to read any secret in that namespace. To store and manage your secrets securely, you can configure the OpenShift Container Platform Secrets Store Container Storage Interface (CSI) Driver Operator to mount secrets from an external secret management system, such as Azure Key Vault, by using a provider plugin. Applications can then use the secret, but the secret does not persist on the system after the application pod is destroyed. The Secrets Store CSI Driver Operator, secrets-store.csi.k8s.io , enables OpenShift Container Platform to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as a volume. The Secrets Store CSI Driver Operator communicates with the provider using gRPC to fetch the mount contents from the specified external secrets store. After the volume is attached, the data in it is mounted into the container's file system. Secrets store volumes are mounted in-line. 2.7.1.1. Secrets store providers The following secrets store providers are available for use with the Secrets Store CSI Driver Operator: AWS Secrets Manager AWS Systems Manager Parameter Store Azure Key Vault Google Secret Manager HashiCorp Vault 2.7.1.2. Automatic rotation The Secrets Store CSI driver periodically rotates the content in the mounted volume with the content from the external secrets store. If a secret is updated in the external secrets store, the secret will be updated in the mounted volume. The Secrets Store CSI Driver Operator polls for updates every 2 minutes. If you enabled synchronization of mounted content as Kubernetes secrets, the Kubernetes secrets are also rotated. Applications consuming the secret data must watch for updates to the secrets. 2.7.2. Installing the Secrets Store CSI driver Prerequisites Access to the OpenShift Container Platform web console. Administrator access to the cluster. Procedure To install the Secrets Store CSI driver: Install the Secrets Store CSI Driver Operator: Log in to the web console. Click Operators OperatorHub . 
Locate the Secrets Store CSI Driver Operator by typing "Secrets Store CSI" in the filter box. Click the Secrets Store CSI Driver Operator button. On the Secrets Store CSI Driver Operator page, click Install . On the Install Operator page, ensure that: All namespaces on the cluster (default) is selected. Installed Namespace is set to openshift-cluster-csi-drivers . Click Install . After the installation finishes, the Secrets Store CSI Driver Operator is listed in the Installed Operators section of the web console. Create the ClusterCSIDriver instance for the driver ( secrets-store.csi.k8s.io ): Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, click Create ClusterCSIDriver . Use the following YAML file: apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed Click Create . 2.7.3. Mounting secrets from an external secrets store to a CSI volume After installing the Secrets Store CSI Driver Operator, you can mount secrets from one of the following external secrets stores to a CSI volume: AWS Secrets Manager AWS Systems Manager Parameter Store Azure Key Vault Google Secret Manager HashiCorp Vault 2.7.3.1. Mounting secrets from AWS Secrets Manager You can use the Secrets Store CSI Driver Operator to mount secrets from AWS Secrets Manager to a Container Storage Interface (CSI) volume in OpenShift Container Platform. To mount secrets from AWS Secrets Manager, your cluster must be installed on AWS and use AWS Security Token Service (STS). Prerequisites Your cluster is installed on AWS and uses AWS Security Token Service (STS). You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You configured AWS Secrets Manager to store the required secrets. You extracted and prepared the ccoctl binary. You installed the jq CLI tool. You have access to the cluster as a user with the cluster-admin role. Procedure Install the AWS Secrets Manager provider: Create a YAML file with the following configuration for the provider resources: Important The AWS Secrets Manager provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. 
Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: "/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f aws-provider.yaml Grant permission to allow the service account to read the AWS secret object: Create a directory to contain the credentials request by running the following command: USD mkdir credentialsrequest-dir-aws Create a YAML file with the following configuration for the credentials request: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "secretsmanager:GetSecretValue" - "secretsmanager:DescribeSecret" effect: Allow resource: "arn:*:secretsmanager:*:*:secret:testSecret-??????" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the step. 
Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Bind the service account with the role ARN by running the following command: USD oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn="<aws_role_arn>" Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testSecret" objectType: "secretsmanager" 1 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as aws . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-aws.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from AWS Secrets Manager in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testSecret View a secret in the pod mount by running the following command: USD oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret Example output <secret_value> Additional resources Configuring the Cloud Credential Operator utility 2.7.3.2. Mounting secrets from AWS Systems Manager Parameter Store You can use the Secrets Store CSI Driver Operator to mount secrets from AWS Systems Manager Parameter Store to a Container Storage Interface (CSI) volume in OpenShift Container Platform. 
To mount secrets from AWS Systems Manager Parameter Store, your cluster must be installed on AWS and use AWS Security Token Service (STS). Prerequisites Your cluster is installed on AWS and uses AWS Security Token Service (STS). You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You configured AWS Systems Manager Parameter Store to store the required secrets. You extracted and prepared the ccoctl binary. You installed the jq CLI tool. You have access to the cluster as a user with the cluster-admin role. Procedure Install the AWS Systems Manager Parameter Store provider: Create a YAML file with the following configuration for the provider resources: Important The AWS Systems Manager Parameter Store provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: "/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f aws-provider.yaml Grant permission to allow the service account to read the AWS secret object: Create a 
directory to contain the credentials request by running the following command: USD mkdir credentialsrequest-dir-aws Create a YAML file with the following configuration for the credentials request: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "ssm:GetParameter" - "ssm:GetParameters" effect: Allow resource: "arn:*:ssm:*:*:parameter/testParameter*" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the next step. Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the next step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Bind the service account with the role ARN by running the following command: USD oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn="<aws_role_arn>" Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testParameter" objectType: "ssmparameter" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as aws . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-aws.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class.
3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from AWS Systems Manager Parameter Store in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testParameter View a secret in the pod mount by running the following command: USD oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testParameter Example output <secret_value> Additional resources Configuring the Cloud Credential Operator utility 2.7.3.3. Mounting secrets from Azure Key Vault You can use the Secrets Store CSI Driver Operator to mount secrets from Azure Key Vault to a Container Storage Interface (CSI) volume in OpenShift Container Platform. To mount secrets from Azure Key Vault, your cluster must be installed on Microsoft Azure. Prerequisites Your cluster is installed on Azure. You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You configured Azure Key Vault to store the required secrets. You installed the Azure CLI ( az ). You have access to the cluster as a user with the cluster-admin role. Procedure Install the Azure Key Vault provider: Create a YAML file with the following configuration for the provider resources: Important The Azure Key Vault provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream Azure documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example azure-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m
memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: "/provider" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: "/var/run/secrets-store-csi-providers" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-azure service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f azure-provider.yaml Create a service principal to access the key vault: Set the service principal client secret as an environment variable by running the following command: USD SERVICE_PRINCIPAL_CLIENT_SECRET="USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)" Set the service principal client ID as an environment variable by running the following command: USD SERVICE_PRINCIPAL_CLIENT_ID="USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)" Create a generic secret with the service principal client secret and ID by running the following command: USD oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET} Apply the secrets-store.csi.k8s.io/used=true label to allow the provider to find this nodePublishSecretRef secret: USD oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-azure.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: "false" useVMManagedIdentity: "false" userAssignedIdentityID: "" keyvaultName: "kvname" objects: | array: - | objectName: secret1 objectType: secret tenantId: "tid" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as azure . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-azure.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-azure-provider" 3 nodePublishSecretRef: name: secrets-store-creds 4 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. 
This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. 4 Specify the name of the Kubernetes secret that contains the service principal credentials to access Azure Key Vault. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from Azure Key Vault in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec my-azure-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output secret1 View a secret in the pod mount by running the following command: USD oc exec my-azure-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1 Example output my-secret-value 2.7.3.4. Mounting secrets from Google Secret Manager You can use the Secrets Store CSI Driver Operator to mount secrets from Google Secret Manager to a Container Storage Interface (CSI) volume in OpenShift Container Platform. To mount secrets from Google Secret Manager, your cluster must be installed on Google Cloud Platform (GCP). Prerequisites You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You configured Google Secret Manager to store the required secrets. You created a service account key named key.json from your Google Cloud service account. You have access to the cluster as a user with the cluster-admin role. Procedure Install the Google Secret Manager provider: Create a YAML file with the following configuration for the provider resources: Example gcp-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-gcp-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-gcp-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-gcp-role rules: - apiGroups: - "" resources: - serviceaccounts/token verbs: - create - apiGroups: - "" resources: - serviceaccounts verbs: - get --- apiVersion: apps/v1 kind: DaemonSet metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers labels: app: csi-secrets-store-provider-gcp spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-gcp template: metadata: labels: app: csi-secrets-store-provider-gcp spec: serviceAccountName: csi-secrets-store-provider-gcp initContainers: - name: chown-provider-mount image: busybox command: - chown - "1000:1000" - /etc/kubernetes/secrets-store-csi-providers volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol securityContext: privileged: true hostNetwork: false hostPID: false hostIPC: false containers: - name: provider image: us-docker.pkg.dev/secretmanager-csi/secrets-store-csi-driver-provider-gcp/plugin@sha256:a493a78bbb4ebce5f5de15acdccc6f4d19486eae9aa4fa529bb60ac112dd6650 securityContext: privileged: true imagePullPolicy: IfNotPresent resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi env: - name: TARGET_DIR value: "/etc/kubernetes/secrets-store-csi-providers" volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: 
providervol mountPropagation: None readOnly: false livenessProbe: failureThreshold: 3 httpGet: path: /live port: 8095 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 volumes: - name: providervol hostPath: path: /etc/kubernetes/secrets-store-csi-providers tolerations: - key: kubernetes.io/arch operator: Equal value: amd64 effect: NoSchedule nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-gcp service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-gcp -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f gcp-provider.yaml Grant permission to read the Google Secret Manager secret: Create a new project by running the following command: USD oc new-project my-namespace Label the my-namespace namespace for pod security admission by running the following command: USD oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite Create a service account for the pod deployment: USD oc create serviceaccount my-service-account --namespace=my-namespace Create a generic secret from the key.json file by running the following command: USD oc create secret generic secrets-store-creds -n my-namespace --from-file=key.json 1 1 You created this key.json file from the Google Secret Manager. Apply the secrets-store.csi.k8s.io/used=true label to allow the provider to find this nodePublishSecretRef secret: USD oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-gcp.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-gcp-provider 1 namespace: my-namespace 2 spec: provider: gcp 3 parameters: 4 secrets: | - resourceName: "projects/my-project/secrets/testsecret1/versions/1" path: "testsecret1.txt" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as gcp . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-gcp.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-gcp-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: my-service-account 3 containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-gcp-provider" 4 nodePublishSecretRef: name: secrets-store-creds 5 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the service account you created. 4 Specify the name of the secret provider class. 
5 Specify the name of the Kubernetes secret that contains the service account credentials to access Google Secret Manager. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from Google Secret Manager in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec my-gcp-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testsecret1 View a secret in the pod mount by running the following command: USD oc exec my-gcp-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testsecret1 Example output <secret_value> 2.7.3.5. Mounting secrets from HashiCorp Vault You can use the Secrets Store CSI Driver Operator to mount secrets from HashiCorp Vault to a Container Storage Interface (CSI) volume in OpenShift Container Platform. Important Mounting secrets from HashiCorp Vault by using the Secrets Store CSI Driver Operator has been tested with the following cloud providers: Amazon Web Services (AWS) Microsoft Azure Other cloud providers might work, but have not been tested yet. Additional cloud providers might be tested in the future. Prerequisites You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You installed Helm. You have access to the cluster as a user with the cluster-admin role. Procedure Add the HashiCorp Helm repository by running the following command: USD helm repo add hashicorp https://helm.releases.hashicorp.com Update all repositories to ensure that Helm is aware of the latest versions by running the following command: USD helm repo update Install the HashiCorp Vault provider: Create a new project for Vault by running the following command: USD oc new-project vault Label the vault namespace for pod security admission by running the following command: USD oc label ns vault security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite Grant privileged access to the vault service account by running the following command: USD oc adm policy add-scc-to-user privileged -z vault -n vault Grant privileged access to the vault-csi-provider service account by running the following command: USD oc adm policy add-scc-to-user privileged -z vault-csi-provider -n vault Deploy HashiCorp Vault by running the following command: USD helm install vault hashicorp/vault --namespace=vault \ --set "server.dev.enabled=true" \ --set "injector.enabled=false" \ --set "csi.enabled=true" \ --set "global.openshift=true" \ --set "injector.agentImage.repository=docker.io/hashicorp/vault" \ --set "server.image.repository=docker.io/hashicorp/vault" \ --set "csi.image.repository=docker.io/hashicorp/vault-csi-provider" \ --set "csi.agent.image.repository=docker.io/hashicorp/vault" \ --set "csi.daemonSet.providersDir=/var/run/secrets-store-csi-providers" Patch the vault-csi-provider daemon set to set the securityContext to privileged by running the following command: USD oc patch daemonset -n vault vault-csi-provider --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/securityContext", "value": {"privileged": true} }]' Verify that the vault-csi-provider pods have started properly by running the following command: USD oc get pods -n vault Example output NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 24m vault-csi-provider-87rgw 1/2
Running 0 5s vault-csi-provider-bd6hp 1/2 Running 0 4s vault-csi-provider-smlv7 1/2 Running 0 5s Configure HashiCorp Vault to store the required secrets: Create a secret by running the following command: USD oc exec vault-0 --namespace=vault -- vault kv put secret/example1 testSecret1=my-secret-value Verify that the secret is readable at the path secret/example1 by running the following command: USD oc exec vault-0 --namespace=vault -- vault kv get secret/example1 Example output = Secret Path = secret/data/example1 ======= Metadata ======= Key Value --- ----- created_time 2024-04-05T07:05:16.713911211Z custom_metadata <nil> deletion_time n/a destroyed false version 1 === Data === Key Value --- ----- testSecret1 my-secret-value Configure Vault to use Kubernetes authentication: Enable the Kubernetes auth method by running the following command: USD oc exec vault-0 --namespace=vault -- vault auth enable kubernetes Example output Success! Enabled kubernetes auth method at: kubernetes/ Configure the Kubernetes auth method: Set the token reviewer as an environment variable by running the following command: USD TOKEN_REVIEWER_JWT="USD(oc exec vault-0 --namespace=vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)" Set the Kubernetes service IP address as an environment variable by running the following command: USD KUBERNETES_SERVICE_IP="USD(oc get svc kubernetes --namespace=default -o go-template="{{ .spec.clusterIP }}")" Update the Kubernetes auth method by running the following command: USD oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/config \ issuer="https://kubernetes.default.svc.cluster.local" \ token_reviewer_jwt="USD{TOKEN_REVIEWER_JWT}" \ kubernetes_host="https://USD{KUBERNETES_SERVICE_IP}:443" \ kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt Example output Success! Data written to: auth/kubernetes/config Create a policy for the application by running the following command: USD oc exec -i vault-0 --namespace=vault -- vault policy write csi -<<EOF path "secret/data/*" { capabilities = ["read"] } EOF Example output Success! Uploaded policy: csi Create an authentication role to access the application by running the following command: USD oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/role/csi \ bound_service_account_names=default \ bound_service_account_namespaces=default,test-ns,negative-test-ns,my-namespace \ policies=csi \ ttl=20m Example output Success! 
Data written to: auth/kubernetes/role/csi Verify that all of the vault pods are running properly by running the following command: USD oc get pods -n vault Example output NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 43m vault-csi-provider-87rgw 2/2 Running 0 19m vault-csi-provider-bd6hp 2/2 Running 0 19m vault-csi-provider-smlv7 2/2 Running 0 19m Verify that all of the secrets-store-csi-driver pods are running properly by running the following command: USD oc get pods -n openshift-cluster-csi-drivers | grep -E "secrets" Example output secrets-store-csi-driver-node-46d2g 3/3 Running 0 45m secrets-store-csi-driver-node-d2jjn 3/3 Running 0 45m secrets-store-csi-driver-node-drmt4 3/3 Running 0 45m secrets-store-csi-driver-node-j2wlt 3/3 Running 0 45m secrets-store-csi-driver-node-v9xv4 3/3 Running 0 45m secrets-store-csi-driver-node-vlz28 3/3 Running 0 45m secrets-store-csi-driver-operator-84bd699478-fpxrw 1/1 Running 0 47m Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-vault.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-vault-provider 1 namespace: my-namespace 2 spec: provider: vault 3 parameters: 4 roleName: "csi" vaultAddress: "http://vault.vault:8200" objects: | - secretPath: "secret/data/example1" objectName: "testSecret1" secretKey: "testSecret1" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as vault . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-vault.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: busybox-deployment 1 namespace: my-namespace 2 labels: app: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: terminationGracePeriodSeconds: 0 containers: - image: registry.k8s.io/e2e-test-images/busybox:1.29-4 name: busybox imagePullPolicy: IfNotPresent command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-vault-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from your HashiCorp Vault in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec busybox-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testSecret1 View a secret in the pod mount by running the following command: USD oc exec busybox-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret1 Example output my-secret-value 2.7.4. Enabling synchronization of mounted content as Kubernetes secrets You can enable synchronization to create Kubernetes secrets from the content on a mounted volume.
An example where you might want to enable synchronization is to use an environment variable in your deployment to reference the Kubernetes secret. Warning Do not enable synchronization if you do not want to store your secrets on your OpenShift Container Platform cluster and in etcd. Enable this functionality only if you require it, such as when you want to use environment variables to refer to the secret. If you enable synchronization, the secrets from the mounted volume are synchronized as Kubernetes secrets after you start a pod that mounts the secrets. The synchronized Kubernetes secret is deleted when all pods that mounted the content are deleted. Prerequisites You have installed the Secrets Store CSI Driver Operator. You have installed a secrets store provider. You have created the secret provider class. You have access to the cluster as a user with the cluster-admin role. Procedure Edit the SecretProviderClass resource by running the following command: USD oc edit secretproviderclass my-azure-provider 1 1 Replace my-azure-provider with the name of your secret provider class. Add the secretObjects section with the configuration for the synchronized Kubernetes secrets: apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: "test" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: "false" keyvaultName: "kvname" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: "tid" 1 Specify the configuration for synchronized Kubernetes secrets. 2 Specify the name of the Kubernetes Secret object to create. 3 Specify the type of Kubernetes Secret object to create. For example, Opaque or kubernetes.io/tls . 4 Specify the object name or alias of the mounted content to synchronize. 5 Specify the data field from the specified objectName to populate the Kubernetes secret with. Save the file to apply the changes. 2.7.5. Viewing the status of secrets in the pod volume mount You can view detailed information, including the versions, of the secrets in the pod volume mount. The Secrets Store CSI Driver Operator creates a SecretProviderClassPodStatus resource in the same namespace as the pod. You can review this resource to see detailed information, including versions, about the secrets in the pod volume mount. Prerequisites You have installed the Secrets Store CSI Driver Operator. You have installed a secrets store provider. You have created the secret provider class. You have deployed a pod that mounts a volume from the Secrets Store CSI Driver Operator. You have access to the cluster as a user with the cluster-admin role. Procedure View detailed information about the secrets in a pod volume mount by running the following command: USD oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1 1 The name of the secret provider class pod status object is in the format of <pod_name>-<namespace>-<secret_provider_class_name> . Example output ... status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount 2.7.6.
Uninstalling the Secrets Store CSI Driver Operator Prerequisites Access to the OpenShift Container Platform web console. Administrator access to the cluster. Procedure To uninstall the Secrets Store CSI Driver Operator: Stop all application pods that use the secrets-store.csi.k8s.io provider. Remove any third-party provider plug-in for your chosen secret store. Remove the Container Storage Interface (CSI) driver and associated manifests: Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, for secrets-store.csi.k8s.io , on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver . When prompted, click Delete . Verify that the CSI driver pods are no longer running. Uninstall the Secrets Store CSI Driver Operator: Note Before you can uninstall the Operator, you must remove the CSI driver first. Click Operators Installed Operators . On the Installed Operators page, scroll or type "Secrets Store CSI" into the Search by name box to find the Operator, and then click it. On the upper right of the Installed Operators > Operator details page, click Actions Uninstall Operator . When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually. After uninstalling, the Secrets Store CSI Driver Operator is no longer listed in the Installed Operators section of the web console. 2.8. Authenticating pods with short-term credentials Some OpenShift Container Platform clusters use short-term security credentials for individual components that are created and managed outside the cluster. Applications in customer workloads on these clusters can authenticate by using the short-term authentication method that the cluster uses. 2.8.1. Configuring short-term authentication for workloads To use this authentication method in your applications, you must complete the following steps: Create a federated identity service account in the Identity and Access Management (IAM) settings for your cloud provider. Create an OpenShift Container Platform service account that can impersonate a service account for your cloud provider. Configure any workloads related to your application to use the OpenShift Container Platform service account. 2.8.1.1. Environment and user access requirements To configure this authentication method, you must meet the following requirements: Your cluster must use short-term security credentials . You must have access to the OpenShift CLI ( oc ) as a user with the cluster-admin role. In your cloud provider console, you must have access as a user with privileges to manage Identity and Access Management (IAM) and federated identity configurations. 2.8.2. Configuring GCP Workload Identity authentication for applications on GCP To use short-term authentication for applications on GCP clusters that use GCP Workload Identity authentication, you must complete the following steps: Configure access in GCP. Create an OpenShift Container Platform service account that can use this access. Deploy customer workloads that authenticate with GCP Workload Identity. Creating a federated GCP service account You can use the Google Cloud console to create a workload identity pool and provider and allow an OpenShift Container Platform service account to impersonate a GCP service account. Prerequisites Your GCP cluster is running OpenShift Container Platform version 4.17.4 or later and uses GCP Workload Identity.
You have access to the Google Cloud console as a user with privileges to manage Identity and Access Management (IAM) and workload identity configurations. You have created a Google Cloud project to use with your application. Procedure In the IAM configuration for your Google Cloud project, identify the identity pool and provider that the cluster uses for GCP Workload Identity authentication. Grant permission for external identities to impersonate a GCP service account. With these permissions, an OpenShift Container Platform service account can work as a federated workload identity. For more information, see GCP documentation about allowing your external workload to access Google Cloud resources . Creating an OpenShift Container Platform service account for GCP You create an OpenShift Container Platform service account and annotate it to impersonate a GCP service account. Prerequisites Your GCP cluster is running OpenShift Container Platform version 4.17.4 or later and uses GCP Workload Identity. You have created a federated GCP service account. You have access to the OpenShift CLI ( oc ) as a user with the cluster-admin role. You have access to the Google Cloud CLI ( gcloud ) as a user with privileges to manage Identity and Access Management (IAM) and workload identity configurations. Procedure Create an OpenShift Container Platform service account to use for GCP Workload Identity pod authentication by running the following command: USD oc create serviceaccount <service_account_name> Annotate the service account with the identity provider and GCP service account to impersonate by running the following command: USD oc patch serviceaccount <service_account_name> -p '{"metadata": {"annotations": {"cloud.google.com/workload-identity-provider": "projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>"}}}' Replace <project_number> , <identity_pool> , and <identity_provider> with the values for your configuration. Note For <project_number> , specify the Google Cloud project number, not the project ID. Annotate the service account with the email address for the GCP service account by running the following command: USD oc patch serviceaccount <service_account_name> -p '{"metadata": {"annotations": {"cloud.google.com/service-account-email": "<service_account_email>"}}}' Replace <service_account_email> with the email address for the GCP service account. Tip GCP service account email addresses typically use the format <service_account_name>@<project_id>.iam.gserviceaccount.com Annotate the service account to use the direct external credentials configuration injection mode by running the following command: USD oc patch serviceaccount <service_account_name> -p '{"metadata": {"annotations": {"cloud.google.com/injection-mode": "direct"}}}' In this mode, the Workload Identity Federation webhook controller directly generates the GCP external credentials configuration and injects them into the pod. Use the Google Cloud CLI ( gcloud ) to specify the permissions for the workload by running the following command: USD gcloud projects add-iam-policy-binding <project_id> --member "<service_account_email>" --role "projects/<project_id>/roles/<role_for_workload_permissions>" Replace <role_for_workload_permissions> with the role for the workload. Specify a role that grants the permissions that your workload requires. 
Verification To verify the service account configuration, inspect the ServiceAccount manifest by running the following command: USD oc get serviceaccount <service_account_name> In the following example, the service-a/app-x OpenShift Container Platform service account can impersonate a GCP service account called app-x : apiVersion: v1 kind: ServiceAccount metadata: name: app-x namespace: service-a annotations: cloud.google.com/workload-identity-provider: "projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>" 1 cloud.google.com/service-account-email: "[email protected]" cloud.google.com/audience: "sts.googleapis.com" 2 cloud.google.com/token-expiration: "86400" 3 cloud.google.com/gcloud-run-as-user: "1000" cloud.google.com/injection-mode: "direct" 4 1 The workload identity provider for the service account of the cluster. 2 The allowed audience for the workload identity provider. 3 The token expiration time period in seconds. 4 The direct external credentials configuration injection mode. Deploying customer workloads that authenticate with GCP Workload Identity To use short-term authentication in your application, you must configure its related pods to use the OpenShift Container Platform service account. Use of the OpenShift Container Platform service account triggers the webhook to mutate the pods so they can impersonate the GCP service account. The following example demonstrates how to deploy a pod that uses the OpenShift Container Platform service account and verify the configuration. Prerequisites Your GCP cluster is running OpenShift Container Platform version 4.17.4 or later and uses GCP Workload Identity. You have created a federated GCP service account. You have created an OpenShift Container Platform service account for GCP. Procedure To create a pod that authenticates with GCP Workload Identity, create a deployment YAML file similar to the following example: Sample deployment apiVersion: apps/v1 kind: Deployment metadata: name: ubi9 spec: replicas: 1 selector: matchLabels: app: ubi9 template: metadata: labels: app: ubi9 spec: serviceAccountName: "<service_account_name>" 1 containers: - name: ubi image: 'registry.access.redhat.com/ubi9/ubi-micro:latest' command: - /bin/sh - '-c' - | sleep infinity 1 Specify the name of the OpenShift Container Platform service account. Apply the deployment file by running the following command: USD oc apply -f deployment.yaml Verification To verify that a pod is using short-term authentication, run the following command: USD oc get pods -o json | jq -r '.items[0].spec.containers[0].env[] | select(.name=="GOOGLE_APPLICATION_CREDENTIALS")' Example output { "name": "GOOGLE_APPLICATION_CREDENTIALS", "value": "/var/run/secrets/workload-identity/federation.json" } The presence of the GOOGLE_APPLICATION_CREDENTIALS environment variable indicates a pod that authenticates with GCP Workload Identity. To verify additional configuration details, examine the pod specification. The following example pod specifications show the environment variables and volume fields that the webhook mutates. 
Example pod specification with the direct injection mode: apiVersion: v1 kind: Pod metadata: name: app-x-pod namespace: service-a annotations: cloud.google.com/skip-containers: "init-first,sidecar" cloud.google.com/external-credentials-json: |- 1 { "type": "external_account", "audience": "//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/on-prem-kubernetes/providers/<identity_provider>", "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken", "credential_source": { "file": "/var/run/secrets/sts.googleapis.com/serviceaccount/token", "format": { "type": "text" } } } spec: serviceAccountName: app-x initContainers: - name: init-first image: container-image:version containers: - name: sidecar image: container-image:version - name: container-name image: container-image:version env: 2 - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/run/secrets/gcloud/config/federation.json - name: CLOUDSDK_COMPUTE_REGION value: asia-northeast1 volumeMounts: - name: gcp-iam-token readOnly: true mountPath: /var/run/secrets/sts.googleapis.com/serviceaccount - mountPath: /var/run/secrets/gcloud/config name: external-credential-config readOnly: true volumes: - name: gcp-iam-token projected: sources: - serviceAccountToken: audience: sts.googleapis.com expirationSeconds: 86400 path: token - downwardAPI: defaultMode: 288 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['cloud.google.com/external-credentials-json'] path: federation.json name: external-credential-config 1 The external credentials configuration generated by the webhook controller. The Kubernetes downwardAPI volume mounts the configuration into the container filesystem. 2 The webhook-injected environment variables for token-based authentication. 2.9. Creating and using config maps The following sections define config maps and how to create and use them. 2.9.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. 
A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. 2.9.2. Creating a config map in the OpenShift Container Platform web console You can create a config map in the OpenShift Container Platform web console. Procedure To create a config map as a cluster administrator: In the Administrator perspective, select Workloads Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . To create a config map as a developer: In the Developer perspective, select Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . 2.9.3. Creating a config map by using the CLI You can use the following command to create a config map from directories, specific files, or literal values. Procedure Create a config map: USD oc create configmap <configmap_name> [options] 2.9.3.1. Creating a config map from a directory You can create a config map from a directory by using the --from-file flag. This method allows you to use multiple files within a directory to create a config map. Each file in the directory is used to populate a key in the config map, where the name of the key is the file name, and the value of the key is the content of the file. For example, the following command creates a config map with the contents of the example-files directory: USD oc create configmap game-config --from-file=example-files/ View the keys in the config map: USD oc describe configmaps game-config Example output Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes You can see that the two keys in the map are created from the file names in the directory specified in the command. The content of those keys might be large, so the output of oc describe only shows the names of the keys and their sizes. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. 
The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map holding the content of each file in this directory by entering the following command: USD oc create configmap game-config \ --from-file=example-files/ Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps game-config -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: "407" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985 2.9.3.2. Creating a config map from a file You can create a config map from a file by using the --from-file flag. You can pass the --from-file option multiple times to the CLI. You can also specify the key to set in a config map for content imported from a file by passing a key=value expression to the --from-file option. For example: USD oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties Note If you create a config map from a file, you can include files containing non-UTF8 data that are placed in this field without corrupting the non-UTF8 data. OpenShift Container Platform detects binary files and transparently encodes the file as MIME . On the server, the MIME payload is decoded and stored without corrupting the data. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. 
The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map by specifying a specific file: USD oc create configmap game-config-2 \ --from-file=example-files/game.properties \ --from-file=example-files/ui.properties Create a config map by specifying a key-value pair: USD oc create configmap game-config-3 \ --from-file=game-special-key=example-files/game.properties Verification Enter the oc get command for the object with the -o option to see the values of the keys from the file: USD oc get configmaps game-config-2 -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: "516" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985 Enter the oc get command for the object with the -o option to see the values of the keys from the key-value pair: USD oc get configmaps game-config-3 -o yaml Example output apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: "530" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985 1 This is the key that you set in the preceding step. 2.9.3.3. Creating a config map from literal values You can supply literal values for a config map. The --from-literal option takes a key=value syntax, which allows literal values to be supplied directly on the command line. Procedure Create a config map by specifying a literal value: USD oc create configmap special-config \ --from-literal=special.how=very \ --from-literal=special.type=charm Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps special-config -o yaml Example output apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: "651" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985 2.9.4. Use cases: Consuming config maps in pods The following sections describe some uses cases when consuming ConfigMap objects in pods. 2.9.4.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. 
As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: SPECIAL_LEVEL_KEY=very log_level=INFO Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 2.9.4.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax USD(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax. Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables.
When this pod is run, the output from the echo command run in the test-container container is as follows: very charm 2.9.4.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple of different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: very You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat /etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: very 2.10. Using device plugins to access external resources with pods Device plugins allow you to use a particular device type (GPU, InfiniBand, or other similar computing resources that require vendor-specific initialization and setup) in your OpenShift Container Platform pod without needing to write custom code. 2.10.1. Understanding device plugins The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. A device plugin is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources.
Any device plugin must support the following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state changes or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartContainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} } Example device plugins Nvidia GPU device plugin for COS-based operating system Nvidia official GPU device plugin Solarflare device plugin KubeVirt device plugins: vfio and kvm Kubernetes device plugin for IBM(R) Crypto Express (CEX) cards Note For an easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 2.10.1.1. Methods for deploying a device plugin Daemon sets are the recommended approach for device plugin deployments. Upon start, the device plugin tries to create a UNIX domain socket at /var/lib/kubelet/device-plugins/ on the node to serve RPCs from Device Manager. Because device plugins must manage hardware resources, access the host file system, and create sockets, they must be run in a privileged security context. More specific details regarding deployment steps can be found with each device plugin implementation. 2.10.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism that is used for requesting any other Extended Resource . Upon start, the device plugin registers itself with Device Manager by invoking Register on /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes the ListAndWatch remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager keeps watching the stream for new updates from the plugin. On the plugin side, the plugin also keeps the stream open, and whenever there is a change in the state of any of the devices, it sends a new device list to the Device Manager over the same streaming connection.
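As an illustration of the Limit/Request mechanism described above, a pod might request a device advertised by a device plugin as follows. This is a sketch only; the resource name example.com/device is a placeholder for whatever name your device plugin registers.

Sample pod specification requesting an extended resource

apiVersion: v1
kind: Pod
metadata:
  name: device-consumer
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9
    command: ["sleep", "3600"]
    resources:
      limits:
        example.com/device: 1   # extended resources are requested as whole-number limits

The scheduler places such a pod only on a node where the advertised resource has free capacity.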
While handling a new pod admission request, the Kubelet passes the requested Extended Resources to the Device Manager for device allocation. Device Manager checks its database to verify whether a corresponding plugin exists. If the plugin exists and has free allocatable devices recorded in its local cache, the Allocate RPC is invoked at that particular device plugin. Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 2.10.3. Enabling Device Manager Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by viewing the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to the CR. 2 Enter the label from the machine config pool. 3 Set DevicePlugins to true . Create the Device Manager: $ oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This socket file is created when the Kubelet is started only if Device Manager is enabled. 2.11. Including pod priority in pod scheduling decisions You can enable pod priority and preemption in your cluster. Pod priority indicates the importance of a pod relative to other pods and queues the pods based on that priority. Pod preemption allows the cluster to evict, or preempt, lower-priority pods so that higher-priority pods can be scheduled if there is no available space on a suitable node. Pod priority also affects the scheduling order of pods and out-of-resource eviction ordering on the node. To use priority and preemption, you create priority classes that define the relative weight of your pods. Then, reference a priority class in the pod specification to apply that weight for scheduling. 2.11.1. Understanding pod priority When you use the Pod Priority and Preemption feature, the scheduler orders pending pods by their priority, and a pending pod is placed ahead of other pending pods with lower priority in the scheduling queue. As a result, the higher-priority pod might be scheduled sooner than pods with lower priority if its scheduling requirements are met. If a pod cannot be scheduled, the scheduler continues to schedule other lower-priority pods. 2.11.1.1.
Pod priority classes You can assign pods a priority class, which is a non-namespaced object that defines a mapping from a name to the integer value of the priority. The higher the value, the higher the priority. A priority class object can take any 32-bit integer value smaller than or equal to 1000000000 (one billion). Reserve numbers larger than or equal to one billion for critical pods that must not be preempted or evicted. By default, OpenShift Container Platform has two reserved priority classes for critical system pods to have guaranteed scheduling. USD oc get priorityclasses Example output NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s system-node-critical - This priority class has a value of 2000001000 and is used for all pods that should never be evicted from a node. Examples of pods that have this priority class are ovnkube-node , and so forth. A number of critical components include the system-node-critical priority class by default, for example: master-api master-controller master-etcd ovn-kubernetes sync system-cluster-critical - This priority class has a value of 2000000000 (two billion) and is used with pods that are important for the cluster. Pods with this priority class can be evicted from a node in certain circumstances. For example, pods configured with the system-node-critical priority class can take priority. However, this priority class does ensure guaranteed scheduling. Examples of pods that can have this priority class are fluentd, add-on components like descheduler, and so forth. A number of critical components include the system-cluster-critical priority class by default, for example: fluentd metrics-server descheduler openshift-user-critical - You can use the priorityClassName field with important pods that cannot bind their resource consumption and do not have predictable resource consumption behavior. Prometheus pods under the openshift-monitoring and openshift-user-workload-monitoring namespaces use the openshift-user-critical priorityClassName . Monitoring workloads use system-critical as their first priorityClass , but this causes problems when monitoring uses excessive memory and the nodes cannot evict them. As a result, monitoring drops priority to give the scheduler flexibility, moving heavy workloads around to keep critical nodes operating. cluster-logging - This priority is used by Fluentd to make sure Fluentd pods are scheduled to nodes over other apps. 2.11.1.2. Pod priority names After you have one or more priority classes, you can create pods that specify a priority class name in a Pod spec. The priority admission controller uses the priority class name field to populate the integer value of the priority. If the named priority class is not found, the pod is rejected. 2.11.2. Understanding pod preemption When a developer creates a pod, the pod goes into a queue. If the developer configured the pod for pod priority or preemption, the scheduler picks a pod from the queue and tries to schedule the pod on a node. If the scheduler cannot find space on an appropriate node that satisfies all the specified requirements of the pod, preemption logic is triggered for the pending pod. When the scheduler preempts one or more pods on a node, the nominatedNodeName field of higher-priority Pod spec is set to the name of the node, along with the nodename field. 
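To observe this during a preemption, you can inspect the field directly. This is a minimal sketch; <pod_name> is a placeholder, and in the Pod API the value is surfaced under .status:

$ oc get pod <pod_name> -o jsonpath='{.status.nominatedNodeName}'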
The scheduler uses the nominatedNodeName field to keep track of the resources reserved for pods and also provides information to the user about preemptions in the cluster. After the scheduler preempts a lower-priority pod, the scheduler honors the graceful termination period of the pod. If another node becomes available while the scheduler is waiting for the lower-priority pod to terminate, the scheduler can schedule the higher-priority pod on that node. As a result, the nominatedNodeName field and nodeName field of the Pod spec might be different. Also, if the scheduler preempts pods on a node and is waiting for termination, and a pod with a higher priority than the pending pod needs to be scheduled, the scheduler can schedule the higher-priority pod instead. In such a case, the scheduler clears the nominatedNodeName of the pending pod, making the pod eligible for another node. Preemption does not necessarily remove all lower-priority pods from a node. The scheduler can schedule a pending pod by removing only a portion of the lower-priority pods. The scheduler considers a node for pod preemption only if the pending pod can be scheduled on the node. 2.11.2.1. Non-preempting priority classes Pods with the preemption policy set to Never are placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled stays in the scheduling queue until sufficient resources are free and it can be scheduled. Non-preempting pods, like other pods, are subject to scheduler back-off. This means that if the scheduler tries unsuccessfully to schedule these pods, they are retried with lower frequency, allowing other pods with lower priority to be scheduled before them. Non-preempting pods can still be preempted by other high-priority pods. 2.11.2.2. Pod preemption and other scheduler settings If you enable pod priority and preemption, consider your other scheduler settings: Pod priority and pod disruption budget A pod disruption budget specifies the minimum number or percentage of replicas that must be up at a time. If you specify pod disruption budgets, OpenShift Container Platform respects them when preempting pods on a best-effort basis. The scheduler attempts to preempt pods without violating the pod disruption budget. If no such pods are found, lower-priority pods might be preempted despite their pod disruption budget requirements. Pod priority and pod affinity Pod affinity requires a new pod to be scheduled on the same node as other pods with the same label. If a pending pod has inter-pod affinity with one or more of the lower-priority pods on a node, the scheduler cannot preempt the lower-priority pods without violating the affinity requirements. In this case, the scheduler looks for another node to schedule the pending pod. However, there is no guarantee that the scheduler can find an appropriate node, and the pending pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. 2.11.2.3. Graceful termination of preempted pods When preempting a pod, the scheduler waits for the pod's graceful termination period to expire, allowing the pod to finish working and exit. If the pod does not exit after the period, the scheduler kills the pod. This graceful termination period creates a time gap between the point that the scheduler preempts the pod and the time when the pending pod can be scheduled on the node.
To minimize this gap, configure a small graceful termination period for lower-priority pods. 2.11.3. Configuring priority and preemption You apply pod priority and preemption by creating a priority class object and associating pods to the priority by using the priorityClassName in your pod specs. Note You cannot add a priority class directly to an existing scheduled pod. Procedure To configure your cluster to use priority and preemption: Create one or more priority classes: Create a YAML file similar to the following: apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: "This priority class should be used for XYZ service pods only." 5 1 The name of the priority class object. 2 The priority value of the object. 3 Optional. Specifies whether this priority class is preempting or non-preempting. The preemption policy defaults to PreemptLowerPriority , which allows pods of that priority class to preempt lower-priority pods. If the preemption policy is set to Never , pods in that priority class are non-preempting. 4 Optional. Specifies whether this priority class should be used for pods without a priority class name specified. This field is false by default. Only one priority class with globalDefault set to true can exist in the cluster. If there is no priority class with globalDefault:true , the priority of pods with no priority class name is zero. Adding a priority class with globalDefault:true affects only pods created after the priority class is added and does not change the priorities of existing pods. 5 Optional. Describes which pods developers should use with this priority class. Enter an arbitrary text string. Create the priority class: USD oc create -f <file-name>.yaml Create a pod spec to include the name of a priority class: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1 1 Specify the priority class to use with this pod. Create the pod: USD oc create -f <file-name>.yaml You can add the priority name directly to the pod configuration or to a pod template. 2.12. Placing pods on specific nodes using node selectors A node selector specifies a map of key-value pairs. The rules are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the indicated key-value pairs as the label on the node. If you are using node affinity and node selectors in the same pod configuration, see the important considerations below. 2.12.1. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. 
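As a compact illustration of that pairing before the detailed procedures that follow, a node labeled with an arbitrary placeholder key such as disktype=ssd only receives pods whose specification carries the matching selector:

$ oc label node <node_name> disktype=ssd

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    disktype: ssd   # must match a label on the target node
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9
    command: ["sleep", "3600"]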
To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-m2g75 pod is controlled by the router-default-66d5cf9464 replica set: USD oc describe pod router-default-66d5cf9464-7pwkc Example output kind: Pod apiVersion: v1 metadata: # ... Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress # ... Controlled By: ReplicaSet/router-default-66d5cf9464 # ... The web console lists the controlling object under ownerReferences in the pod YAML: apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc # ... ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true # ... Procedure Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" # ... Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: # ... template: metadata: # ... spec: metadata: labels: region: east type: user-node # ... Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: "user-node" region: "east" # ... 
Verify that the labels are added to the node: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.30.3 Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 # ... spec: # ... template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 # ... 1 Add the node selector. To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 # ... spec: nodeSelector: region: east type: user-node # ... Note You cannot add a node selector directly to an existing scheduled pod. 2.13. Run Once Duration Override Operator 2.13.1. Run Once Duration Override Operator overview You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for. 2.13.1.1. About the Run Once Duration Override Operator OpenShift Container Platform relies on run-once pods to perform tasks such as deploying a pod or performing a build. Run-once pods are pods that have a RestartPolicy of Never or OnFailure . Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that those run-once pods can be active. After the time limit expires, the cluster will try to actively terminate those pods. The main reason to have such a limit is to prevent tasks such as builds to run for an excessive amount of time. To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used. 2.13.2. Run Once Duration Override Operator release notes Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that run-once pods can be active. After the time limit expires, the cluster tries to terminate the run-once pods. The main reason to have such a limit is to prevent tasks such as builds to run for an excessive amount of time. To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. These release notes track the development of the Run Once Duration Override Operator for OpenShift Container Platform. For an overview of the Run Once Duration Override Operator, see About the Run Once Duration Override Operator . 2.13.2.1. Run Once Duration Override Operator 1.2.0 Issued: 16 October 2024 The following advisory is available for the Run Once Duration Override Operator 1.2.0: ( RHSA-2024:7548 ) 2.13.2.1.1. Bug fixes This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs). 2.13.3. Overriding the active deadline for run-once pods You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for. 
By enabling the run-once duration override on a namespace, all future run-once pods created or updated in that namespace have their activeDeadlineSeconds field set to the value specified by the Run Once Duration Override Operator. Note If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used. 2.13.3.1. Installing the Run Once Duration Override Operator You can use the web console to install the Run Once Duration Override Operator. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Run Once Duration Override Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-run-once-duration-override-operator in the Name field and click Create . Install the Run Once Duration Override Operator. Navigate to Operators OperatorHub . Enter Run Once Duration Override Operator into the filter box. Select the Run Once Duration Override Operator and click Install . On the Install Operator page: The Update channel is set to stable , which installs the latest stable release of the Run Once Duration Override Operator. Select A specific namespace on the cluster . Choose openshift-run-once-duration-override-operator from the dropdown menu under Installed namespace . Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Create a RunOnceDurationOverride instance. From the Operators Installed Operators page, click Run Once Duration Override Operator . Select the Run Once Duration Override tab and click Create RunOnceDurationOverride . Edit the settings as necessary. Under the runOnceDurationOverride section, you can update the spec.activeDeadlineSeconds value, if required. The predefined value is 3600 seconds, or 1 hour. Click Create . Verification Log in to the OpenShift CLI. Verify all pods are created and running properly. USD oc get pods -n openshift-run-once-duration-override-operator Example output NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s 2.13.3.2. Enabling the run-once duration override on a namespace To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. Prerequisites The Run Once Duration Override Operator is installed. Procedure Log in to the OpenShift CLI. Add the label to enable the run-once duration override to your namespace: USD oc label namespace <namespace> \ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true 1 Specify the namespace to enable the run-once duration override on. After you enable the run-once duration override on this namespace, future run-once pods that are created in this namespace will have their activeDeadlineSeconds field set to the override value from the Run Once Duration Override Operator. Existing pods in this namespace will also have their activeDeadlineSeconds value set when they are updated . 
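If you want to confirm that the label was applied before creating any pods, one minimal check (replace <namespace> with your namespace) is:

$ oc get namespace <namespace> --show-labels

The output should include runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true in the LABELS column.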
Verification Create a test run-once pod in the namespace that you enabled the run-once duration override on: apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done 1 Replace <namespace> with the name of your namespace. 2 The restartPolicy must be Never or OnFailure to be a run-once pod. Verify that the pod has its activeDeadlineSeconds field set: USD oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds Example output activeDeadlineSeconds: 3600 2.13.3.3. Updating the run-once active deadline override value You can customize the override value that the Run Once Duration Override Operator applies to run-once pods. The predefined value is 3600 seconds, or 1 hour. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed the Run Once Duration Override Operator. Procedure Log in to the OpenShift CLI. Edit the RunOnceDurationOverride resource: USD oc edit runoncedurationoverride cluster Update the activeDeadlineSeconds field: apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: # ... spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1 # ... 1 Set the activeDeadlineSeconds field to the desired value, in seconds. Save the file to apply the changes. Any future run-once pods created in namespaces where the run-once duration override is enabled will have their activeDeadlineSeconds field set to this new value. Existing run-once pods in these namespaces will receive this new value when they are updated. 2.13.4. Uninstalling the Run Once Duration Override Operator You can remove the Run Once Duration Override Operator from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 2.13.4.1. Uninstalling the Run Once Duration Override Operator You can use the web console to uninstall the Run Once Duration Override Operator. Uninstalling the Run Once Duration Override Operator does not unset the activeDeadlineSeconds field for run-once pods, but it will no longer apply the override value to future run-once pods. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have installed the Run Once Duration Override Operator. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Select openshift-run-once-duration-override-operator from the Project dropdown list. Delete the RunOnceDurationOverride instance. Click Run Once Duration Override Operator and select the Run Once Duration Override tab. Click the Options menu to the cluster entry and select Delete RunOnceDurationOverride . In the confirmation dialog, click Delete . Uninstall the Run Once Duration Override Operator Operator. Navigate to Operators Installed Operators . Click the Options menu to the Run Once Duration Override Operator entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 2.13.4.2. Uninstalling Run Once Duration Override Operator resources Optionally, after uninstalling the Run Once Duration Override Operator, you can remove its related resources from your cluster. Prerequisites You have access to the cluster with cluster-admin privileges. 
You have access to the OpenShift Container Platform web console. You have uninstalled the Run Once Duration Override Operator. Procedure Log in to the OpenShift Container Platform web console. Remove CRDs that were created when the Run Once Duration Override Operator was installed: Navigate to Administration CustomResourceDefinitions . Enter RunOnceDurationOverride in the Name field to filter the CRDs. Click the Options menu to the RunOnceDurationOverride CRD and select Delete CustomResourceDefinition . In the confirmation dialog, click Delete . Delete the openshift-run-once-duration-override-operator namespace. Navigate to Administration Namespaces . Enter openshift-run-once-duration-override-operator into the filter box. Click the Options menu to the openshift-run-once-duration-override-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-run-once-duration-override-operator and click Delete . Remove the run-once duration override label from the namespaces that it was enabled on. Navigate to Administration Namespaces . Select your namespace. Click Edit to the Labels field. Remove the runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true label and click Save . 2.14. Running pods in Linux user namespaces Linux user namespaces allow administrators to isolate the container user and group identifiers (UIDs and GIDs) so that a container can have a different set of permissions in the user namespace than on the host system where it is running. This allows containers to run processes with full privileges inside the user namespace, but the processes can be unprivileged for operations on the host machine. By default, a container runs in the host system's root user namespace. Running a container in the host user namespace can be useful when the container needs a feature that is available only in that user namespace. However, it introduces security concerns, such as the possibility of container breakouts, in which a process inside a container breaks out onto the host where the process can access or modify files on the host or in other containers. Running containers in individual user namespaces can mitigate container breakouts and several other vulnerabilities that a compromised container can pose to other pods and the node itself. You can configure Linux user namespace use by setting the hostUsers parameter to false in the pod spec, as shown in the following procedure. Important Support for Linux user namespaces is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.14.1. Configuring Linux user namespace support Prerequisites You enabled the required Technology Preview features for your cluster by editing the FeatureGate CR named cluster : USD oc edit featuregate cluster Example FeatureGate CR apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1 1 Enables the required UserNamespacesSupport and ProcMountType features. 
Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them. Do not enable this feature set on production clusters. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. You enabled the crun container runtime on the worker nodes. crun is currently the only released OCI runtime with support for user namespaces. apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 containerRuntimeConfig: defaultRuntime: crun 2 1 Specifies the machine config pool label. 2 Specifies the container runtime to deploy. Procedure Edit the default user ID (UID) and group ID (GID) range of the OpenShift Container Platform namespace where your pod is deployed by running the following command: USD oc edit ns/<namespace_name> Example namespace apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/description: "" openshift.io/display-name: "" openshift.io/requester: system:admin openshift.io/sa.scc.mcs: s0:c27,c24 openshift.io/sa.scc.supplemental-groups: 1000/10000 1 openshift.io/sa.scc.uid-range: 1000/10000 2 # ... name: userns # ... 1 Edit the default GID to match the value you specified in the pod spec. The range for a Linux user namespace must be lower than 65,535. The default is 1000000000/10000 . 2 Edit the default UID to match the value you specified in the pod spec. The range for a Linux user namespace must be lower than 65,535. The default is 1000000000/10000 . Note The range 1000/10000 means 10,000 values starting with ID 1000, so it specifies the range of IDs from 1000 to 10,999. Enable the use of Linux user namespaces by creating a pod configured to run with a restricted profile and with the hostUsers parameter set to false . Create a YAML file similar to the following: Example pod specification apiVersion: v1 kind: Pod metadata: name: userns-pod # ... spec: containers: - name: userns-container image: registry.access.redhat.com/ubi9 command: ["sleep", "1000"] securityContext: capabilities: drop: ["ALL"] allowPrivilegeEscalation: false 1 runAsNonRoot: true 2 seccompProfile: type: RuntimeDefault runAsUser: 1000 3 runAsGroup: 1000 4 hostUsers: false 5 # ... 1 Specifies that a pod cannot request privilege escalation. This is required for the restricted-v2 security context constraints (SCC). 2 Specifies that the container will run with a user with any UID other than 0. 3 Specifies the UID the container is run with. 4 Specifies which primary GID the containers is run with. 5 Requests that the pod is to be run in a user namespace. If true , the pod runs in the host user namespace. If false , the pod runs in a new user namespace that is created for the pod. The default is true . Create the pod by running the following command: Verification Check the pod user and group IDs being used in the pod container you created. The pod is inside the Linux user namespace. 
Start a shell session with the container in your pod: $ oc rsh -c <container_name> pod/<pod_name> Example command $ oc rsh -c userns-container pod/userns-pod Display the user and group IDs being used inside the container: sh-5.1$ id Example output uid=1000(1000) gid=1000(1000) groups=1000(1000) Display the user ID being used in the container user namespace: sh-5.1$ lsns -t user Example output NS TYPE NPROCS PID USER COMMAND 4026532447 user 3 1 1000 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1 1 The UID for the process is 1000, the same as you set in the pod spec. Check the pod user ID being used on the node where the pod was created. The node is outside of the Linux user namespace. This user ID should be different from the UID being used in the container. Start a debug session for that node: $ oc debug node/<node_name> Example command $ oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9 Set /host as the root directory within the debug shell: sh-5.1# chroot /host Display the user ID being used in the node user namespace: sh-5.1# lsns -t user Example output NS TYPE NPROCS PID USER COMMAND 4026531837 user 233 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 28 4026532447 user 1 4767 2908816384 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1 1 The UID for the process is 2908816384, which is different from what you set in the pod spec.
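As an additional spot check, you can confirm from the API that the pod requested its own user namespace. This is a sketch using the pod from the earlier example:

$ oc get pod userns-pod -o jsonpath='{.spec.hostUsers}'

Expected output, given that the example spec sets the field explicitly:

false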
"apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: \"app=scalable-cr\" 1 replicas: 1", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: \"Auto\"", "oc delete namespace openshift-vertical-pod-autoscaler", "oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io", "oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io", "oc delete crd verticalpodautoscalers.autoscaling.k8s.io", "oc delete MutatingWebhookConfiguration vpa-webhook-config", "oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB", "apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: 
kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com", "oc create sa <service_account_name> -n <your_namespace>", "apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3", "oc apply -f service-account-token-secret.yaml", "oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1", "ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA", "curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2", "apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1", "kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376", "oc create -f <file-name>.yaml", "oc get secrets", "NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m", "oc describe secret my-cert", "Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes", "apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed", 
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers", "oc apply -f aws-provider.yaml", "mkdir credentialsrequest-dir-aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - \"secretsmanager:DescribeSecret\" effect: Allow resource: \"arn:*:secretsmanager:*:*:secret:testSecret-??????\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider", "oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'", "https://<oidc_provider_name>", "ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output", "2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds", "oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "apiVersion: 
secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" objectType: \"secretsmanager\"", "oc create -f secret-provider-class-aws.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3", "oc create -f deployment.yaml", "oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testSecret", "oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret", "<secret_value>", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers", "oc apply -f aws-provider.yaml", "mkdir credentialsrequest-dir-aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: 
apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"ssm:GetParameter\" - \"ssm:GetParameters\" effect: Allow resource: \"arn:*:ssm:*:*:parameter/testParameter*\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider", "oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'", "https://<oidc_provider_name>", "ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output", "2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds", "oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testParameter\" objectType: \"ssmparameter\"", "oc create -f secret-provider-class-aws.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3", "oc create -f deployment.yaml", "oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testParameter", "oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret", "<secret_value>", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: 
mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: \"/provider\" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: \"/var/run/secrets-store-csi-providers\" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers", "oc apply -f azure-provider.yaml", "SERVICE_PRINCIPAL_CLIENT_SECRET=\"USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)\"", "SERVICE_PRINCIPAL_CLIENT_ID=\"USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)\"", "oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET}", "oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: \"false\" useVMManagedIdentity: \"false\" userAssignedIdentityID: \"\" keyvaultName: \"kvname\" objects: | array: - | objectName: secret1 objectType: secret tenantId: \"tid\"", "oc create -f secret-provider-class-azure.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-azure-provider\" 3 nodePublishSecretRef: name: secrets-store-creds 4", "oc create -f deployment.yaml", "oc exec my-azure-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "secret1", "oc exec my-azure-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1", "my-secret-value", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-gcp-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-gcp-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-gcp-role rules: - apiGroups: - \"\" resources: - serviceaccounts/token verbs: - create - apiGroups: - 
\"\" resources: - serviceaccounts verbs: - get --- apiVersion: apps/v1 kind: DaemonSet metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers labels: app: csi-secrets-store-provider-gcp spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-gcp template: metadata: labels: app: csi-secrets-store-provider-gcp spec: serviceAccountName: csi-secrets-store-provider-gcp initContainers: - name: chown-provider-mount image: busybox command: - chown - \"1000:1000\" - /etc/kubernetes/secrets-store-csi-providers volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol securityContext: privileged: true hostNetwork: false hostPID: false hostIPC: false containers: - name: provider image: us-docker.pkg.dev/secretmanager-csi/secrets-store-csi-driver-provider-gcp/plugin@sha256:a493a78bbb4ebce5f5de15acdccc6f4d19486eae9aa4fa529bb60ac112dd6650 securityContext: privileged: true imagePullPolicy: IfNotPresent resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi env: - name: TARGET_DIR value: \"/etc/kubernetes/secrets-store-csi-providers\" volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol mountPropagation: None readOnly: false livenessProbe: failureThreshold: 3 httpGet: path: /live port: 8095 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 volumes: - name: providervol hostPath: path: /etc/kubernetes/secrets-store-csi-providers tolerations: - key: kubernetes.io/arch operator: Equal value: amd64 effect: NoSchedule nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-gcp -n openshift-cluster-csi-drivers", "oc apply -f gcp-provider.yaml", "oc new-project my-namespace", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite", "oc create serviceaccount my-service-account --namespace=my-namespace", "oc create secret generic secrets-store-creds -n my-namespace --from-file=key.json 1", "oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-gcp-provider 1 namespace: my-namespace 2 spec: provider: gcp 3 parameters: 4 secrets: | - resourceName: \"projects/my-project/secrets/testsecret1/versions/1\" path: \"testsecret1.txt\"", "oc create -f secret-provider-class-gcp.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-gcp-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: my-service-account 3 containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-gcp-provider\" 4 nodePublishSecretRef: name: secrets-store-creds 5", "oc create -f deployment.yaml", "oc exec my-gcp-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testsecret1", "oc exec my-gcp-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testsecret1", "<secret_value>", "helm repo add hashicorp 
https://helm.releases.hashicorp.com", "helm repo update", "oc new-project vault", "oc label ns vault security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite", "oc adm policy add-scc-to-user privileged -z vault -n vault", "oc adm policy add-scc-to-user privileged -z vault-csi-provider -n vault", "helm install vault hashicorp/vault --namespace=vault --set \"server.dev.enabled=true\" --set \"injector.enabled=false\" --set \"csi.enabled=true\" --set \"global.openshift=true\" --set \"injector.agentImage.repository=docker.io/hashicorp/vault\" --set \"server.image.repository=docker.io/hashicorp/vault\" --set \"csi.image.repository=docker.io/hashicorp/vault-csi-provider\" --set \"csi.agent.image.repository=docker.io/hashicorp/vault\" --set \"csi.daemonSet.providersDir=/var/run/secrets-store-csi-providers\"", "oc patch daemonset -n vault vault-csi-provider --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/securityContext\", \"value\": {\"privileged\": true} }]'", "oc get pods -n vault", "NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 24m vault-csi-provider-87rgw 1/2 Running 0 5s vault-csi-provider-bd6hp 1/2 Running 0 4s vault-csi-provider-smlv7 1/2 Running 0 5s", "oc exec vault-0 --namespace=vault -- vault kv put secret/example1 testSecret1=my-secret-value", "oc exec vault-0 --namespace=vault -- vault kv get secret/example1", "= Secret Path = secret/data/example1 ======= Metadata ======= Key Value --- ----- created_time 2024-04-05T07:05:16.713911211Z custom_metadata <nil> deletion_time n/a destroyed false version 1 === Data === Key Value --- ----- testSecret1 my-secret-value", "oc exec vault-0 --namespace=vault -- vault auth enable kubernetes", "Success! Enabled kubernetes auth method at: kubernetes/", "TOKEN_REVIEWER_JWT=\"USD(oc exec vault-0 --namespace=vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)\"", "KUBERNETES_SERVICE_IP=\"USD(oc get svc kubernetes --namespace=default -o go-template=\"{{ .spec.clusterIP }}\")\"", "oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/config issuer=\"https://kubernetes.default.svc.cluster.local\" token_reviewer_jwt=\"USD{TOKEN_REVIEWER_JWT}\" kubernetes_host=\"https://USD{KUBERNETES_SERVICE_IP}:443\" kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt", "Success! Data written to: auth/kubernetes/config", "oc exec -i vault-0 --namespace=vault -- vault policy write csi -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF", "Success! Uploaded policy: csi", "oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/role/csi bound_service_account_names=default bound_service_account_namespaces=default,test-ns,negative-test-ns,my-namespace policies=csi ttl=20m", "Success! 
Data written to: auth/kubernetes/role/csi", "oc get pods -n vault", "NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 43m vault-csi-provider-87rgw 2/2 Running 0 19m vault-csi-provider-bd6hp 2/2 Running 0 19m vault-csi-provider-smlv7 2/2 Running 0 19m", "oc get pods -n openshift-cluster-csi-drivers | grep -E \"secrets\"", "secrets-store-csi-driver-node-46d2g 3/3 Running 0 45m secrets-store-csi-driver-node-d2jjn 3/3 Running 0 45m secrets-store-csi-driver-node-drmt4 3/3 Running 0 45m secrets-store-csi-driver-node-j2wlt 3/3 Running 0 45m secrets-store-csi-driver-node-v9xv4 3/3 Running 0 45m secrets-store-csi-driver-node-vlz28 3/3 Running 0 45m secrets-store-csi-driver-operator-84bd699478-fpxrw 1/1 Running 0 47m", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-vault-provider 1 namespace: my-namespace 2 spec: provider: vault 3 parameters: 4 roleName: \"csi\" vaultAddress: \"http://vault.vault:8200\" objects: | - secretPath: \"secret/data/example1\" objectName: \"testSecret1\" secretKey: \"testSecret1", "oc create -f secret-provider-class-vault.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: busybox-deployment 1 namespace: my-namespace 2 labels: app: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: terminationGracePeriodSeconds: 0 containers: - image: registry.k8s.io/e2e-test-images/busybox:1.29-4 name: busybox imagePullPolicy: IfNotPresent command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-vault-provider\" 3", "oc create -f deployment.yaml", "oc exec busybox-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testSecret1", "oc exec busybox-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret1", "my-secret-value", "oc edit secretproviderclass my-azure-provider 1", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: \"test\" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: \"false\" keyvaultName: \"kvname\" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: \"tid\"", "oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1", "status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount", "oc create serviceaccount <service_account_name>", "oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": {\"cloud.google.com/workload-identity-provider\": \"projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>\"}}}'", "oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": {\"cloud.google.com/service-account-email\": \"<service_account_email>\"}}}'", "oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": 
{\"cloud.google.com/injection-mode\": \"direct\"}}}'", "gcloud projects add-iam-policy-binding <project_id> --member \"<service_account_email>\" --role \"projects/<project_id>/roles/<role_for_workload_permissions>\"", "oc get serviceaccount <service_account_name>", "apiVersion: v1 kind: ServiceAccount metadata: name: app-x namespace: service-a annotations: cloud.google.com/workload-identity-provider: \"projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>\" 1 cloud.google.com/service-account-email: \"[email protected]\" cloud.google.com/audience: \"sts.googleapis.com\" 2 cloud.google.com/token-expiration: \"86400\" 3 cloud.google.com/gcloud-run-as-user: \"1000\" cloud.google.com/injection-mode: \"direct\" 4", "apiVersion: apps/v1 kind: Deployment metadata: name: ubi9 spec: replicas: 1 selector: matchLabels: app: ubi9 template: metadata: labels: app: ubi9 spec: serviceAccountName: \"<service_account_name>\" 1 containers: - name: ubi image: 'registry.access.redhat.com/ubi9/ubi-micro:latest' command: - /bin/sh - '-c' - | sleep infinity", "oc apply -f deployment.yaml", "oc get pods -o json | jq -r '.items[0].spec.containers[0].env[] | select(.name==\"GOOGLE_APPLICATION_CREDENTIALS\")'", "{ \"name\": \"GOOGLE_APPLICATION_CREDENTIALS\", \"value\": \"/var/run/secrets/workload-identity/federation.json\" }", "apiVersion: v1 kind: Pod metadata: name: app-x-pod namespace: service-a annotations: cloud.google.com/skip-containers: \"init-first,sidecar\" cloud.google.com/external-credentials-json: |- 1 { \"type\": \"external_account\", \"audience\": \"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/on-prem-kubernetes/providers/<identity_provider>\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken\", \"credential_source\": { \"file\": \"/var/run/secrets/sts.googleapis.com/serviceaccount/token\", \"format\": { \"type\": \"text\" } } } spec: serviceAccountName: app-x initContainers: - name: init-first image: container-image:version containers: - name: sidecar image: container-image:version - name: container-name image: container-image:version env: 2 - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/run/secrets/gcloud/config/federation.json - name: CLOUDSDK_COMPUTE_REGION value: asia-northeast1 volumeMounts: - name: gcp-iam-token readOnly: true mountPath: /var/run/secrets/sts.googleapis.com/serviceaccount - mountPath: /var/run/secrets/gcloud/config name: external-credential-config readOnly: true volumes: - name: gcp-iam-token projected: sources: - serviceAccountToken: audience: sts.googleapis.com expirationSeconds: 86400 path: token - downwardAPI: defaultMode: 288 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['cloud.google.com/external-credentials-json'] path: federation.json name: external-credential-config", "kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2", "oc create configmap <configmap_name> [options]", "oc create configmap game-config --from-file=example-files/", "oc describe configmaps game-config", "Name: game-config 
Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice", "oc create configmap game-config --from-file=example-files/", "oc get configmaps game-config -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice", "oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "oc get configmaps game-config-2 -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985", "oc get configmaps game-config-3 -o yaml", "apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985", "oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm", "oc get configmaps special-config -o yaml", "apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985", "apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4", "apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true 
seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "SPECIAL_LEVEL_KEY=very log_level=INFO", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "very charm", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never", "very", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never", "very", "service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. 
Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }", "oc describe machineconfig <name>", "oc describe machineconfig 00-worker", "Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3", "oc create -f devicemgr.yaml", "kubeletconfig.machineconfiguration.openshift.io/devicemgr created", "oc get priorityclasses", "NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s", "apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: \"This priority class should be used for XYZ service pods only.\" 5", "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1", "oc create -f <file-name>.yaml", "oc describe pod router-default-66d5cf9464-7pwkc", "kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464", "apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc label nodes <name> <key>=<value>", "oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.30.3", "kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux 
node-role.kubernetes.io/worker: '' type: user-node 1", "apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node", "oc get pods -n openshift-run-once-duration-override-operator", "NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s", "oc label namespace <namespace> \\ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true", "apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done", "oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds", "activeDeadlineSeconds: 3600", "oc edit runoncedurationoverride cluster", "apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 containerRuntimeConfig: defaultRuntime: crun 2", "oc edit ns/<namespace_name>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: system:admin openshift.io/sa.scc.mcs: s0:c27,c24 openshift.io/sa.scc.supplemental-groups: 1000/10000 1 openshift.io/sa.scc.uid-range: 1000/10000 2 name: userns", "apiVersion: v1 kind: Pod metadata: name: userns-pod spec: containers: - name: userns-container image: registry.access.redhat.com/ubi9 command: [\"sleep\", \"1000\"] securityContext: capabilities: drop: [\"ALL\"] allowPrivilegeEscalation: false 1 runAsNonRoot: true 2 seccompProfile: type: RuntimeDefault runAsUser: 1000 3 runAsGroup: 1000 4 hostUsers: false 5", "oc create -f <file_name>.yaml", "oc rsh -c <container_name> pod/<pod_name>", "oc rsh -c userns-container_name pod/userns-pod", "sh-5.1USD id", "uid=1000(1000) gid=1000(1000) groups=1000(1000)", "sh-5.1USD lsns -t user", "NS TYPE NPROCS PID USER COMMAND 4026532447 user 3 1 1000 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1", "oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9", "oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9", "sh-5.1# chroot /host", "sh-5.1# lsns -t user", "NS TYPE NPROCS PID USER COMMAND 4026531837 user 233 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 28 4026532447 user 1 4767 2908816384 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/nodes/working-with-pods
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/service_telemetry_framework_1.5/making-open-source-more-inclusive
Chapter 4. Image encryption
Chapter 4. Image encryption As a storage administrator, you can set a secret key that is used to encrypt a specific RBD image. Image-level encryption is handled internally by RBD clients. Note The krbd module does not support image-level encryption. Note You can use external tools such as dm-crypt or QEMU to encrypt an RBD image. Prerequisites A running Red Hat Ceph Storage 7 cluster. Root-level permissions. 4.1. Encryption format RBD images are not encrypted by default. You can encrypt an RBD image by formatting it to one of the supported encryption formats. The format operation persists the encryption metadata to the RBD image. The encryption metadata includes information such as the encryption format and version, the cipher algorithm and mode specifications, and the information used to secure the encryption key. The encryption key is protected by a user-kept secret, a passphrase, which is never stored as persistent data in the RBD image. The encryption format operation requires you to specify the encryption format, cipher algorithm, and mode specification, as well as a passphrase. The encryption metadata is stored in the RBD image, currently as an encryption header that is written at the start of the raw image. This means that the effective image size of the encrypted image is lower than the raw image size. Note Unless explicitly (re-)formatted, clones of an encrypted image are inherently encrypted using the same format and secret. Note Any data written to the RBD image before formatting might become unreadable, even though it might still occupy storage resources. RBD images with the journal feature enabled cannot be encrypted. 4.2. Encryption load By default, all RBD APIs treat encrypted RBD images the same way as unencrypted RBD images. You can read or write raw data anywhere in the image. Writing raw data into the image might risk the integrity of the encryption format. For example, the raw data could overwrite the encryption metadata located at the beginning of the image. To safely perform encrypted input/output (I/O) or maintenance operations on an encrypted RBD image, an additional encryption load operation must be applied immediately after opening the image. The encryption load operation requires you to specify the encryption format and a passphrase for unlocking the encryption key for the image itself and for each of its explicitly formatted ancestor images. All I/O for the opened RBD image is encrypted or decrypted; for a cloned RBD image, this includes I/O for the parent images. The encryption key is stored in memory by the RBD client until the image is closed. Note Once the encryption is loaded on the RBD image, no other encryption load or format operation can be applied. Additionally, API calls for retrieving the RBD image size and the parent overlap using the opened image context return the effective image size and the effective parent overlap, respectively. The encryption is loaded automatically when mapping RBD images as block devices through rbd-nbd. Note If a clone of an encrypted image is explicitly formatted, flattening or shrinking the cloned image ceases to be transparent, because the parent data must be re-encrypted according to the cloned image format as it is copied from the parent snapshot.
If encryption is not loaded before the flatten operation is issued, any parent data that was previously accessible in the cloned image might become unreadable. Note If a clone of an encrypted image is explicitly formatted, the operation of shrinking the cloned image ceases to be transparent. This is because, in scenarios such as the cloned image containing snapshots or being shrunk to a size that is not aligned with the object size, some data must be copied from the parent snapshot, similar to flattening. If encryption is not loaded before the shrink operation is issued, any parent data that was previously accessible in the cloned image might become unreadable. 4.3. Supported formats Both Linux Unified Key Setup (LUKS) 1 and 2 are supported. The data layout is fully compliant with the LUKS specification. External LUKS-compatible tools such as dm-crypt or QEMU can safely perform encrypted input/output (I/O) on encrypted RBD images. Additionally, you can import existing LUKS images created by external tools by copying the raw LUKS data into the RBD image. Currently, only the Advanced Encryption Standard (AES) 128 and 256 encryption algorithms are supported. xts-plain64 is currently the only supported encryption mode. To use the LUKS format, format the RBD image with the following command: Note You need to create a file named passphrase.txt and enter a passphrase. You can randomly generate the passphrase, which might contain NULL characters. If the passphrase ends with a newline character, it is stripped off. Syntax Example Note You can select either the luks1 or luks2 encryption format. The encryption format operation generates a LUKS header and writes it at the start of the RBD image. A single keyslot is appended to the header. The keyslot holds a randomly generated encryption key and is protected by the passphrase read from the passphrase file. By default, AES-256 in xts-plain64 mode, which is the current recommended mode and the default for other LUKS tools, is used. Adding or removing additional passphrases is currently not supported natively, but can be achieved using LUKS tools such as cryptsetup. The LUKS header size can vary, up to 136 MiB, but it is usually up to 16 MiB, depending on the installed version of libcryptsetup. For optimal performance, the encryption format sets the data offset to be aligned with the image object size. For example, expect a minimum overhead of 8 MiB if using an image configured with an 8 MiB object size. In LUKS1, sectors, which are the minimal encryption units, are fixed at 512 bytes. LUKS2 supports larger sectors, and for better performance, the default sector size is set to the maximum of 4 KiB. Writes that are either smaller than a sector or not aligned to a sector start trigger a guarded read-modify-write chain on the client, with a considerable latency penalty. A batch of such unaligned writes can lead to I/O races, which further deteriorate performance. Red Hat recommends avoiding RBD encryption in cases where incoming writes cannot be guaranteed to be LUKS sector-aligned. To map a LUKS-encrypted image, run the following command: Syntax Example Note You can select either the luks1 or luks2 encryption format. Note For security reasons, both the encryption format and encryption load operations are CPU-intensive and might take a few seconds to complete. For encrypted I/O, assuming AES-NI is enabled, a relatively small latency of a few microseconds might be added, as well as a small increase in CPU utilization.
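Putting the format and load steps from sections 4.1 to 4.3 together, the following minimal sketch is assembled from this chapter's own example commands. The pool name pool1, image name luksimage1, and passphrase file passphrase.bin are the document's example names; reusing the same passphrase file for the map step is an assumption, because the original examples mix passphrase.bin and passphrase.txt.

# Create the passphrase file; a trailing newline, if any, is stripped off.
printf '%s' '<passphrase>' > passphrase.bin

# Format the image; luks1 is shown here, and luks2 is accepted by the same command.
rbd encryption format pool1/luksimage1 luks1 passphrase.bin

# Map the image through rbd-nbd; the encryption is loaded automatically using the supplied passphrase.
rbd device map -t nbd -o encryption-format=luks1,encryption-passphrase-file=passphrase.bin pool1/luksimage1

Once mapped, the device behaves like any other block device, and all I/O through it is encrypted and decrypted by the RBD client.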
4.4. Adding encryption format to images and clones Layered client-side encryption is supported. The cloned images can be encrypted with their own format and passphrase, potentially different from that of the parent image. Add an encryption format to images and clones with the rbd encryption format command. Given a LUKS2-formatted image, you can create both a LUKS2-formatted clone and a LUKS1-formatted clone. Prerequisites A running Red Hat Ceph Storage cluster with Block Device (RBD) configured. Root-level access to the node. Procedure Create a LUKS2-formatted image: Syntax Example The rbd resize command grows the image to compensate for the overhead associated with the LUKS2 header. With the LUKS2-formatted image, create a LUKS2-formatted clone with the same effective size: Syntax Example With the LUKS2-formatted image, create a LUKS1-formatted clone with the same effective size: Syntax Example Because the LUKS1 header is usually smaller than the LUKS2 header, the rbd resize command at the end shrinks the cloned image to get rid of the unwanted space allowance. With the LUKS1-formatted image, create a LUKS2-formatted clone with the same effective size: Syntax Example Because the LUKS2 header is usually bigger than the LUKS1 header, the rbd resize command at the beginning temporarily grows the parent image to reserve some extra space in the parent snapshot and, consequently, in the cloned image. This is necessary to make all parent data accessible in the cloned image. The rbd resize command at the end shrinks the parent image back to its original size, without impacting the parent snapshot or the cloned image, to get rid of the unused reserved space. The same applies to creating a formatted clone of an unformatted image, since an unformatted image does not have a header at all. Additional Resources See the Configuring Ansible inventory location section in the Red Hat Ceph Storage Installation Guide for more details on adding clients to the cephadm-ansible inventory.
[ "rbd encryption format POOL_NAME / LUKS_IMAGE luks1|luks2 PASSPHRASE_FILE", "rbd encryption format pool1/luksimage1 luks1 passphrase.bin", "rbd device map -t nbd -o encryption-format=luks1|luks2,encryption-passphrase-file=passphrase.txt POOL_NAME / LUKS_IMAGE", "rbd device map -t nbd -o encryption-format=luks1,encryption-passphrase-file=passphrase.txt pool1/luksimage1", "rbd create --size SIZE POOL_NAME / LUKS_IMAGE rbd encryption format POOL_NAME / LUKS_IMAGE luks1|luks2 PASSPHRASE_FILE rbd resize --size 50G --encryption-passphrase-file PASSPHRASE_FILE POOL_NAME / LUKS_IMAGE", "rbd create --size 50G mypool/myimage rbd encryption format mypool/myimage luks2 passphrase.txt rbd resize --size 50G --encryption-passphrase-file passphrase.txt mypool/myimage", "rbd snap create POOL_NAME / IMAGE_NAME @ SNAP_NAME rbd snap protect POOL_NAME / IMAGE_NAME @ SNAP_NAME rbd clone POOL_NAME / IMAGE_NAME @ SNAP_NAME POOL_NAME / CLONE_NAME rbd encryption format POOL_NAME / CLONE_NAME luks1 CLONE_PASSPHRASE_FILE", "rbd snap create mypool/myimage@snap rbd snap protect mypool/myimage@snap rbd clone mypool/myimage@snap mypool/myclone rbd encryption format mypool/myclone luks1 clone-passphrase.bin", "rbd snap create POOL_NAME / IMAGE_NAME @ SNAP_NAME rbd snap protect POOL_NAME / IMAGE_NAME @ SNAP_NAME rbd clone POOL_NAME / IMAGE_NAME @ SNAP_NAME POOL_NAME / CLONE_NAME rbd encryption format POOL_NAME / CLONE_NAME luks1 CLONE_PASSPHRASE_FILE rbd resize --size SIZE --allow-shrink --encryption-passphrase-file CLONE_PASSPHRASE_FILE --encryption-passphrase-file PASSPHRASE_FILE POOL_NAME / CLONE_NAME", "rbd snap create mypool/myimage@snap rbd snap protect mypool/myimage@snap rbd clone mypool/myimage@snap mypool/myclone rbd encryption format mypool/myclone luks1 clone-passphrase.bin rbd resize --size 50G --allow-shrink --encryption-passphrase-file clone-passphrase.bin --encryption-passphrase-file passphrase.bin mypool/myclone", "rbd resize --size SIZE POOL_NAME / LUKS_IMAGE rbd snap create POOL_NAME / IMAGE_NAME @ SNAP_NAME rbd snap protect POOL_NAME / IMAGE_NAME @ SNAP_NAME rbd clone POOL_NAME / IMAGE_NAME @ SNAP_NAME POOL_NAME / CLONE_NAME rbd encryption format POOL_NAME / CLONE_NAME luks2 CLONE_PASSPHRASE_FILE rbd resize --size SIZE --allow-shrink --encryption-passphrase-file PASSPHRASE_FILE POOL_NAME / LUKS_IMAGE rbd resize --size SIZE --allow-shrink --encryption-passphrase-file CLONE_PASSPHRASE_FILE --encryption-passphrase-file PASSPHRASE_FILE POOL_NAME_/ CLONE_NAME", "rbd resize --size 51G mypool/myimage rbd snap create mypool/myimage@snap rbd snap protect mypool/myimage@snap rbd clone mypool/my-image@snap mypool/myclone rbd encryption format mypool/myclone luks2 clone-passphrase.bin rbd resize --size 50G --allow-shrink --encryption-passphrase-file passphrase.bin mypool/myimage rbd resize --size 50G --allow-shrink --encryption-passphrase-file clone-passphrase.bin --encryption-passphrase-file passphrase.bin mypool/myclone" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/block_device_guide/image-encryption
Appendix C. Anaconda UI specific commands
Appendix C. Anaconda UI specific commands C.1. Commands used in Anaconda The "pwpolicy" command is an Anaconda UI specific command that can be used only in the %anaconda section of the kickstart file. pwpolicy (optional) This command can be used to enforce a custom password policy, which specifies requirements for passwords created during installation, based on factors such as password length and strength. Replace name with either root , user or luks to enforce the policy for the root password, user passwords, or LUKS passphrase, respectively. The libpwquality library is used to check minimum password requirements (length and quality). You can use the pwscore and pwmake commands provided by the libpwquality package to check the quality score of a password, or to create a random password with a given score. See the pwscore(1) and pwmake(1) man pages for details about these commands. Important This command can only be used inside the %anaconda section. --minlen= - Sets the minimum allowed password length, in characters. The default is 6 . --minquality= - Sets the minimum allowed password quality as defined by the libpwquality library. The default value is 1 . --strict - Enables strict password enforcement. Passwords which do not meet the requirements specified in --minquality= and --minlen= will not be accepted. This option is disabled by default. --notstrict - Passwords which do not meet the minimum quality requirements specified by the --minquality= and --minlen= options will be allowed, after Done is clicked twice. --emptyok - Allows the use of empty passwords. Enabled by default for user passwords. --notempty - Disallows the use of empty passwords. Enabled by default for the root password and the LUKS passphrase. --changesok - Allows changing the password in the user interface, even if the Kickstart file already specifies a password. Disabled by default. --nochanges - Disallows changing passwords which are already set in the Kickstart file. Enabled by default.
[ "pwpolicy name [--minlen= length ] [--minquality= quality ] [--strict|--nostrict] [--emptyok|--noempty] [--changesok|--nochanges]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/commands-for-anaconda
Chapter 5. Deprecated functionalities
Chapter 5. Deprecated functionalities None.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/3.16.0_release_notes_and_known_issues/deprecated-functionalities
Chapter 5. Upgrading the Migration Toolkit for Containers
Chapter 5. Upgrading the Migration Toolkit for Containers You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.10 by using Operator Lifecycle Manager. You can upgrade MTC on OpenShift Container Platform 4.5, and earlier versions, by reinstalling the legacy Migration Toolkit for Containers Operator. Important If you are upgrading from MTC version 1.3, you must perform an additional procedure to update the MigPlan custom resource (CR). 5.1. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 4.10 You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.10 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform console, navigate to Operators Installed Operators . Operators that have a pending upgrade display an Upgrade available status. Click Migration Toolkit for Containers Operator . Click the Subscription tab. Any upgrades requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for upgrade and click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date . Click Workloads Pods to verify that the MTC pods are running. 5.2. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform versions 4.2 to 4.5 You can upgrade Migration Toolkit for Containers (MTC) on OpenShift Container Platform versions 4.2 to 4.5 by manually installing the legacy Migration Toolkit for Containers Operator. Prerequisites You must be logged in as a user with cluster-admin privileges. You must have access to registry.redhat.io . You must have podman installed. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials by entering the following command: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: USD podman cp USD(podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Replace the Migration Toolkit for Containers Operator by entering the following command: USD oc replace --force -f operator.yml Scale the migration-operator deployment to 0 to stop the deployment by entering the following command: USD oc scale -n openshift-migration --replicas=0 deployment/migration-operator Scale the migration-operator deployment to 1 to start the deployment and apply the changes by entering the following command: USD oc scale -n openshift-migration --replicas=1 deployment/migration-operator Verify that the migration-operator was upgraded by entering the following command: USD oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F ":" '{ print USDNF }' Download the controller.yml file by entering the following command: USD podman cp USD(podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Create the migration-controller object by entering the following command: USD oc create -f controller.yml Verify that the MTC pods are running by entering the following command: USD oc get pods -n openshift-migration 5.3. 
Upgrading MTC 1.3 to 1.7 If you are upgrading Migration Toolkit for Containers (MTC) version 1.3.x to 1.7, you must update the MigPlan custom resource (CR) manifest on the cluster on which the MigrationController pod is running. Because the indirectImageMigration and indirectVolumeMigration parameters do not exist in MTC 1.3, their default value in version 1.4 is false , which means that direct image migration and direct volume migration are enabled. Because the direct migration requirements are not fulfilled, the migration plan cannot reach a Ready state unless these parameter values are changed to true . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Log in to the cluster on which the MigrationController pod is running. Get the MigPlan CR manifest: USD oc get migplan <migplan> -o yaml -n openshift-migration Update the following parameter values and save the file as migplan.yaml : ... spec: indirectImageMigration: true indirectVolumeMigration: true Replace the MigPlan CR manifest to apply the changes: USD oc replace -f migplan.yaml -n openshift-migration Get the updated MigPlan CR manifest to verify the changes: USD oc get migplan <migplan> -o yaml -n openshift-migration
[ "podman login registry.redhat.io", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "oc replace --force -f operator.yml", "oc scale -n openshift-migration --replicas=0 deployment/migration-operator", "oc scale -n openshift-migration --replicas=1 deployment/migration-operator", "oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc create -f controller.yml", "oc get pods -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "spec: indirectImageMigration: true indirectVolumeMigration: true", "oc replace -f migplan.yaml -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/migration_toolkit_for_containers/upgrading-mtc
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Power clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Both internal and external OpenShift Data Foundation clusters are supported on IBM Power. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal-Attached Devices mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode using Red Hat Ceph Storage External mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_power/preface-ibm-power-systems
Release notes for Red Hat build of OpenJDK 8.0.362
Release notes for Red Hat build of OpenJDK 8.0.362 Red Hat build of OpenJDK 8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.362/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_tuning/proc-providing-feedback-on-redhat-documentation
Chapter 1. Using cost models and analyzing your usage
Chapter 1. Using cost models and analyzing your usage You can use cost models in cost management to apply a price to usage in your hybrid cloud environment, then distribute the costs to resources. 1.1. What is a cost model? It can be difficult to determine the real cost of cloud-based IT systems. Different integrations provide a variety of cost data and metrics, which make it complicated to calculate and accurately distribute costs. A cost model is a framework that cost management uses to determine the calculations to apply to costs. With a cost model, you can associate a price to metrics provided by your integrations and charge for utilization of resources. In some cases, costs are related to the raw costs of the infrastructure, while in other cases there is a price list that maps usage to costs. Your data must be normalized before you can add a markup to cover your overhead and distribute the charges to your resources or end customers. With a cost model, you can better align costs to utilization: customers that use a resource more will be charged more. A cost model can have multiple different integrations assigned to it, but a single integration can be mapped to only one cost model. 1.2. Cost model concepts The following terms are important for understanding the cost management cost model workflow: Cost model A cost model is a framework that defines the calculations that cost management uses for costs. Cost models use raw costs and metrics and can help with budgeting, accounting, visualization, and analysis. In cost management, cost models provide the basis for the cost information that users view. You can record, categorize, and allocate costs to specific customers, business units, or projects. 1.2.1. Cost model terminology Markup The portion of cost that is calculated by applying markup or discount to infrastructure raw cost in the cost management application Example: For a raw cost of USD100 and a markup of 10%, the markup would be USD10, and the cost would be USD110 (sum of both). Request The pod resources requested according to OpenShift Usage The pod resources that were used according to OpenShift Usage cost The portion of cost calculated by applying hourly or monthly price list rates to metrics Example: For a metric of 100 core-hours and a rate of USD1/core-hour, the usage cost would be USD100. Effective usage The pod resources used or requested each hour, whichever is higher Monthly cost The portion of cost calculated by applying monthly price list rates to metrics returned as part of usage cost. Monthly cost can be configured for OpenShift nodes or clusters in a cost model to account for subscription costs, for example. Monthly costs are currently shown as part of the usage costs in the cost management API and interface. Note These costs are not currently amortized over the reported period in the API, so when viewing a daily breakdown of cost, monthly costs will only show on the first of the month. Example: For an OpenShift cluster with 10 nodes at a rate of USD10,000 per node per month, the monthly cost would be USD100,000. Price list A list of rates used within the application's cost model to calculate the usage cost of resources Distributed costs The costs calculated by the cost model are distributed to higher level application concepts, such as project, account, service, and so on. How the costs are distributed depends on the cost type (infrastructure or supplementary) assigned in the cost model. 
Count (cluster) The distinct number of clusters that are identified during the month Count (nodes) The distinct number of nodes that are identified during the month Count (persistent volume claims) The distinct number of volume claims that are identified during the month 1.3. The cost model workflow The following diagram shows the cost model workflow that cost management uses to apply a price to metrics and inventory, normalize cost data from different integrations, apply a markup (or discount), then distribute the costs across the relevant resources. The cost model also helps differentiate raw costs from the costs used in cost management. Cost management collects cost data from several integrations: Inventory - All the resources that have ever run in your integration, including those that are no longer in use. For example, if your OpenShift Container Platform environment contains a node that is not in use, that node still costs USDx amount per month. There are several ways to collect inventory data into cost management: cost management can generate inventory from the AWS data export, Azure or Google Cloud export, or OpenShift Metering Operator reports. Metrics - A subset of the OpenShift inventory showing usage and consumption for each resource. Cloud raw costs - AWS, Azure, and Google Cloud provide regular reports to cost management listing consumption of resources and their costs, which cost management uses for calculations. As a result, configuring a custom price list is not necessary for cloud integrations. The cost model allows you to apply a markup or discount of your choice to account for other costs and overhead, and provides options for assigning a cost layer (infrastructure or supplementary) to costs: For OpenShift Container Platform integrations - Since the metrics and inventory data do not have a price assigned to usage, you must create and assign a price list to your integrations to determine the usage cost of these resources. A price list includes rates for storage, memory, CPU usage and requests, and clusters and nodes. For AWS, Azure, and Google Cloud integrations - You can create cost models for these integrations to account for any extra costs or overhead in your environment by applying a markup percentage, or a negative percentage to calculate a discount. Costs from integrations are then collected together and allocated as infrastructure cost and supplementary cost . The cost is then distributed to resources across the environment. Depending on your organization, these resources may include OpenShift projects, nodes, or clusters, and cloud integration services or accounts. You can also use tagging to distribute the costs by team, project, or group within your organization. Note For more information about configuring tagging for cost management, see Managing cost data using tagging . 1.4. Analyzing unused cluster and node capacity You can analyze your cluster usage with cost management by examining the unrequested capacity and unused capacity of your cluster. The unrequested capacity identifies how much of the requested resources are being used in the cluster. When this value is high, there are nodes in your cluster that are requesting more resources than it uses. You can find the nodes that are responsible and adjust your requests to make your cluster usage more efficient. However, the usage can exceed the amount requested, so the unused capacity can help you to understand if you should adjust the overall capacity. 
Request The pod resources requested, which OpenShift reports. Unrequested capacity is the requests minus the usage. Effective usage The pod resources used or requested each hour, whichever is higher. Unused capacity is the capacity minus the effective usage. For more information, see Section 1.2, "Cost model concepts" . Prerequisites You created an integration on Red Hat Hybrid Cloud Console . Procedure Log in to the Red Hat Hybrid Cloud Console . From the Services menu, click Spend Management Cost Management . To view unused cluster capacity and unrequested cluster capacity: In the Global Navigation, click Cost Management OpenShift . On the OpenShift details page, in the Group By menu, select Cluster . Filter by Cluster , then select a cluster from the results. On the Cost overview tab, you can see your unused capacity and unrequested capacity in the CPU tile. If the unrequested capacity is higher than your unused capacity, or the unrequested core-hours are too high as a percentage of capacity, you can search for the nodes in your cluster that are responsible. To view unused node capacity and unrequested node capacity: In the Global Navigation, click Cost Management OpenShift . On the OpenShift details page, in the Group By menu, select Node . Filter by Node , then select a node from the results. If the unrequested capacity is higher than your unused capacity, or the unrequested core-hours are too high as a percentage of capacity for your node, you can adjust them in your cloud service provider to optimize your cloud spending. 1.5. Understanding cost distribution in cost management Costs can belong to three different groups: Platform costs Costs incurred by running OpenShift Container Platform. Platform costs include the cost of all projects with the label Default . These namespaces and projects contain openshift- or kube- in the name. These projects were not created by the user but are required for OpenShift to run. You can optionally add namespaces and projects to platform costs. For more information, see Section 1.5.3, "Adding OpenShift projects" . Worker unallocated costs Costs that represent any unused part of your worker node's usage and request capacity. Network unattributed costs Costs associated with ingress and egress network traffic for individual nodes. 1.5.1. Distributing costs To configure the distribution of platform and worker unallocated costs into your projects, you can set costs to Distribute or Do not distribute . When you create a cost model, the costs are set to Distribute by default. This default setting means that the cost of the Platform projects is set to zero. The costs distribute into your project costs according to the sum of the effective CPU or the Memory usage of your cost model. Most users use the default Distribute setting to track platform and worker unallocated costs for their organizations. If you instead set the costs to Do not distribute , the costs of each Platform project are displayed individually instead of spread across all of the projects. The worker unallocated cost is still calculated, but it appears as an individual project in the OpenShift details page. With this option, you cannot see how the costs would distribute to user projects. You can always distribute platform or worker unallocated costs independently of each other, or you can choose to distribute none of them. 1.5.2. Calculating costs Cost management uses effective usage to calculate both platform and worker unallocated costs, in addition to project costs.
To distribute platform costs, cost management uses the following formula: (individual user project effective usage) / (sum of all user projects' effective usage) * (platform cost) To distribute worker unallocated costs, cost management uses the following formula: (individual user project effective usage) / (sum of all user projects' effective usage) * (worker unallocated cost) 1.5.3. Adding OpenShift projects In cost management, the Group named Platform has default projects that you cannot remove. These projects start with the prefixes openshift or kube and have a Default label in the OpenShift details page. You can add your own projects to the Platform group so that you have control over what is considered a platform cost. Any project that includes distributed costs from Platform projects has the Overhead label. For example, you might have a project whose cost you consider overhead and want to treat as a platform cost. You can add that project to the Platform group so that its cost is distributed according to your cost model. Prerequisites You must have a cluster that has a cost model set to Distribute . Procedure To add OpenShift projects to the Platform group, complete the following steps: In Settings in cost management, click the Platform projects tab. Select a project to add to the Platform group. Click Add projects . The project now has the label Platform , but not the label Default . Verification Complete the following steps to verify that your costs are distributing properly: In cost management, click OpenShift to open the OpenShift Details page. Select the cluster whose project you edited in the previous steps. The project should display a cost of USD0 because you set the cost to distribute across all other projects. A project with the label Overhead includes the cost of that project plus the default project costs.
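A brief worked example of the distribution formula in Section 1.5.2, using assumed values: if project A has an effective usage of 20 core-hours, all user projects together have 100 core-hours of effective usage, and the platform cost for the period is USD50, then project A receives (20 / 100) * 50 = USD10 of distributed platform cost. The same calculation, with the worker unallocated cost in place of the platform cost, gives the worker unallocated amount distributed to project A.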
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/using_cost_models/assembly-using-cost-models
Chapter 16. GenericKafkaListenerConfigurationBroker schema reference
Chapter 16. GenericKafkaListenerConfigurationBroker schema reference Used in: GenericKafkaListenerConfiguration Full list of GenericKafkaListenerConfigurationBroker schema properties You can see example configuration for the nodePort , host , loadBalancerIP and annotations properties in the GenericKafkaListenerConfigurationBootstrap schema , which configures bootstrap service overrides. Advertised addresses for brokers By default, AMQ Streams tries to automatically determine the hostnames and ports that your Kafka cluster advertises to its clients. This is not sufficient in all situations, because the infrastructure on which AMQ Streams is running might not provide the right hostname or port through which Kafka can be accessed. You can specify a broker ID and customize the advertised hostname and port in the configuration property of the listener. AMQ Streams will then automatically configure the advertised address in the Kafka brokers and add it to the broker certificates so it can be used for TLS hostname verification. Overriding the advertised host and ports is available for all types of listeners. Example of an external route listener configured with overrides for advertised addresses listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342 # ... 16.1. GenericKafkaListenerConfigurationBroker schema properties Property Description broker ID of the kafka broker (broker identifier). Broker IDs start from 0 and correspond to the number of broker replicas. integer advertisedHost The host name used in the brokers' advertised.listeners . string advertisedPort The port number used in the brokers' advertised.listeners . integer host The broker host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners. string nodePort Node port for the per-broker service. This field can be used only with nodeport type listener. integer loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.This field can be used only with loadbalancer type listener. string annotations Annotations that will be added to the Ingress or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer , nodeport , or ingress type listeners. map labels Labels that will be added to the Ingress , Route , or Service resource. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. map
[ "listeners: # - name: external port: 9094 type: route tls: true authentication: type: tls configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-generickafkalistenerconfigurationbroker-reference
Chapter 14. Installing on vSphere
Chapter 14. Installing on vSphere 14.1. Installing a cluster on vSphere In OpenShift Container Platform version 4.7, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 14.1.1. Prerequisites Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Review details about the OpenShift Container Platform installation and update processes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 14.1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 14.1.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 14.1. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 6.5U3 or vSphere 6.7U2 and later vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware's NSX Container Plug-in (NCP) is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. 
See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 14.1.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 14.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 14.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 14.4. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 14.1.5. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 14.1. 
Roles and privileges required for installation vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone Folder.Create Folder.Delete Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 14.2. 
Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing a OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes, invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. All nodes must be in the same VLAN. 
You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 14.5. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 14.1.6. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.1.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14.1.8. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 14.1.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Paste the pull secret from the Red Hat OpenShift Cluster Manager . + When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. + .Example output + Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. + Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. + Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 14.1.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 14.1.10.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.1.10.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 14.1.10.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.1.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. 
You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.1.12. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 14.1.12.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 14.1.12.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.1.12.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. 
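The managementState change and the empty storage claim described above can be combined in a single patch. The following is only a minimal sketch of that step, not a replacement for reviewing the object with oc edit first; adjust it to your cluster before applying it:
# Hedged example: move the Image Registry Operator out of the Removed state and
# leave the PVC claim blank so that the default image-registry-storage PVC is created.
$ oc patch configs.imageregistry.operator.openshift.io cluster \
    --type merge \
    --patch '{"spec":{"managementState":"Managed","storage":{"pvc":{"claim":""}}}}'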
Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 14.1.12.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 14.1.13. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 14.1.14.
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 14.1.15. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 14.2. Installing a cluster on vSphere with customizations In OpenShift Container Platform version 4.7, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 14.2.1. Prerequisites Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Review details about the OpenShift Container Platform installation and update processes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 14.2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 14.2.3.
VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 14.6. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 6.5U3 or vSphere 6.7U2 and later vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware's NSX Container Plug-in (NCP) is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 14.2.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 14.7. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 14.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 14.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 14.2.5. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. 
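As an illustration only, you can pre-create such a role with a tool like the community govc CLI, which is not part of this procedure. The role name, principal, and abbreviated privilege list below are placeholders; take the complete per-object privilege lists from Example 14.3 that follows:
# Hedged sketch: create a custom vCenter role and assign it to the installation account.
$ govc role.create openshift-installer-role \
    Cns.Searchable \
    InventoryService.Tagging.CreateTag \
    Sessions.ValidateSession \
    StorageProfile.View
# Grant the role on a vCenter object; set -propagate per Example 14.4.
$ govc permissions.set -principal installer@vsphere.local \
    -role openshift-installer-role -propagate=false /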
While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 14.3. Roles and privileges required for installation vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone Folder.Create Folder.Delete Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 14.4. 
Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend to use vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. All nodes must be in the same VLAN.
You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which an NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 14.10. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 14.2.6. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging are required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
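For example, on a FIPS-enabled installation host you might generate an ECDSA key instead of the ed25519 key shown above. This is a minimal sketch only; the placeholders match the ones used in this procedure:
# Hedged example: a FIPS-compatible alternative to the ed25519 command above.
$ ssh-keygen -t ecdsa -b 521 -N '' -f <path>/<file_name>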
Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa . Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14.2.8. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The archive contains the certificate files in subdirectories for each supported operating system, such as certs/lin for Linux. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 14.2.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 14.2.9.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. 
Ensure that the field names for any parameters that are specified are correct. 14.2.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 14.11. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 14.2.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 14.12. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 14.2.9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 14.13. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines.
This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. 14.2.9.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 14.14. Additional VMware vSphere cluster parameters Parameter Description Values platform.vsphere.vCenter The fully-qualified hostname or IP address of the vCenter server. String platform.vsphere.username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String platform.vsphere.password The password for the vCenter user name. String platform.vsphere.datacenter The name of the datacenter to use in the vCenter instance. String platform.vsphere.defaultDatastore The name of the default datastore to use for provisioning volumes. String platform.vsphere.folder Optional . The absolute path of an existing folder where the installation program creates the virtual machines.
If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . platform.vsphere.network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String platform.vsphere.cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String platform.vsphere.apiVIP The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . platform.vsphere.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . 14.2.9.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 14.15. Optional VMware vSphere machine pool parameters Parameter Description Values platform.vsphere.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . platform.vsphere.osDisk.diskSizeGB The size of the disk in gigabytes. Integer platform.vsphere.cpus The total number of virtual processor cores to assign a virtual machine. Integer platform.vsphere.coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value is 1 Integer platform.vsphere.memoryMB The size of a virtual machine's memory in megabytes. Integer 14.2.9.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 7 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 8 The cluster name that you specified in your DNS records. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 14.2.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. 
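Because noProxy must include the vCenter IP address and the machine network range, it can help to confirm the address you are excluding before you continue. This is only a hedged convenience check, and the hostname is a placeholder:
# Resolve the vCenter address that must appear in the noProxy list.
$ dig +short vcenter.example.com
# Alternative lookup if dig is not installed:
$ getent hosts vcenter.example.com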
Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster and that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.2.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 14.2.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 14.2.11.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH .
To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.2.11.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 14.2.11.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.2.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.2.13. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 14.2.13.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 14.2.13.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters.
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.2.13.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 14.2.13.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 14.2.14. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 14.2.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 14.2.16. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 14.3. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.7, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
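For reference, a minimal sketch of the networking stanza that you might customize in install-config.yaml is shown below. It uses the example CIDRs and the default network provider from the sample file later in this section; the values are assumptions that you must adjust to match your environment:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
The "Installation configuration parameters" and "Network configuration phases" sections that follow describe when each of these fields can be set or changed.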
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 14.3.1. Prerequisites Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Review details about the OpenShift Container Platform installation and update processes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, confirm with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 14.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 14.3.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 14.16. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 6.5U3 or vSphere 6.7U2 and later vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware's NSX Container Plug-in (NCP) is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 
Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 14.3.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 14.17. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 14.18. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 14.19. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 14.3.5. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 14.5. 
Roles and privileges required for installation vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone Folder.Create Folder.Delete Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 14.6.
Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend to use vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. All nodes must be in the same VLAN.
You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks cause errors, which an NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 14.20. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 14.3.6. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa . Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.3.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14.3.8. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The compressed file contains a certs directory with certificates for each operating system, for example the certs/lin directory for Linux. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 14.3.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 14.3.9.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. 
Ensure that the field names for any parameters that are specified are correct. 14.3.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 14.21. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 14.3.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 14.22. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 14.3.9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 14.23. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines.
This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 14.3.9.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 14.24. Additional VMware vSphere cluster parameters Parameter Description Values platform.vsphere.vCenter The fully-qualified hostname or IP address of the vCenter server. String platform.vsphere.username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String platform.vsphere.password The password for the vCenter user name. String platform.vsphere.datacenter The name of the datacenter to use in the vCenter instance. String platform.vsphere.defaultDatastore The name of the default datastore to use for provisioning volumes. String platform.vsphere.folder Optional . The absolute path of an existing folder where the installation program creates the virtual machines.
If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . platform.vsphere.network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String platform.vsphere.cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String platform.vsphere.apiVIP The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . platform.vsphere.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . 14.3.9.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 14.25. Optional VMware vSphere machine pool parameters Parameter Description Values platform.vsphere.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . platform.vsphere.osDisk.diskSizeGB The size of the disk in gigabytes. Integer platform.vsphere.cpus The total number of virtual processor cores to assign a virtual machine. Integer platform.vsphere.coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value is 1 Integer platform.vsphere.memoryMB The size of a virtual machine's memory in megabytes. Integer 14.3.9.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 7 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 8 The cluster name that you specified in your DNS records. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 14.3.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.3.10. Network configuration phases When specifying a cluster configuration prior to installation, there are several phases in the installation procedures when you can modify the network configuration: Phase 1 After entering the openshift-install create install-config command. In the install-config.yaml file, you can customize the following network-related fields: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to "Installation configuration parameters". Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Phase 2 After entering the openshift-install create manifests command. If you must specify advanced network configuration, during this phase you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 14.3.11. Specifying advanced network configuration You can use advanced configuration customization to integrate your cluster into your existing network environment by specifying additional configuration for your cluster network provider. You can specify advanced network configuration only before you install the cluster. Important Modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. 
Open the cluster-network-03-config.yml file in an editor and specify the advanced network configuration for your cluster, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. 14.3.12. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 14.3.12.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 14.26. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 This value is read-only and specified in the install-config.yaml file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 This value is read-only and specified in the install-config.yaml file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 14.27. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider.
ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 14.28. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 14.29. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. 
Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 14.30. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 14.3.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 14.3.14. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc .
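Because an earlier version of oc cannot complete all commands in OpenShift Container Platform 4.7, you might want to confirm, after finishing one of the following procedures, that the new binary is the one found on your PATH . A quick check, assuming the download and PATH steps below are complete, is to print the version: USD oc version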
14.3.14.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.3.14.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 14.3.14.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.3.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.3.16. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 14.3.16.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. 
Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 14.3.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.3.16.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Important Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 14.3.16.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.
Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 14.3.17. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 14.3.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 14.3.19. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 14.4. Installing a cluster on vSphere with user-provisioned infrastructure In OpenShift Container Platform version 4.7, you can install a cluster on VMware vSphere infrastructure that you provision.
Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 14.4.1. Prerequisites Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Review details about the OpenShift Container Platform installation and update processes. Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 14.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 14.4.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 14.31. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 6.5U3 or vSphere 6.7U2 and later vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. 
VMware's NSX Container Plug-in (NCP) is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 14.4.4. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 14.4.4.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: One temporary bootstrap machine Three control plane, or master, machines At least two compute machines, which are also known as worker machines. Note The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7.9. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . Important All virtual machines must reside in the same datastore and in the same folder as the installer. 14.4.4.2. Network connectivity requirements All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config files from the Machine Config Server. The machines are configured with static IP addresses. No DHCP server is required. Additionally, each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server. 14.4.4.3. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. 14.4.4.4. Minimum resource requirements Each cluster machine must meet the following minimum requirements: Table 14.32. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 14.4.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
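In practice, this usually means listing pending CSRs and approving them with oc after the machines boot. A minimal sketch, assuming you are logged in as a cluster administrator (the CSR names reported by your cluster are environment-specific):

oc get csr
oc adm certificate approve <csr_name>

Some administrators approve all pending CSRs in one pass, for example with oc get csr -o name | xargs oc adm certificate approve , but review what you are approving before doing so.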
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 14.4.5. Creating the user-provisioned infrastructure Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure. Prerequisites Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Provision the required load balancers. Configure the ports for your machines. Configure DNS. Ensure network connectivity. 14.4.5.1. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. Ensure that the machines have persistent IP addresses and host names. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster. Table 14.33. All machines to all machines Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . TCP/UDP 30000 - 32767 Kubernetes node port Table 14.34. All machines to control plane Protocol Port Description TCP 6443 Kubernetes API Table 14.35. Control plane machines to control plane machines Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Network topology requirements The infrastructure that you provision for your cluster must meet the following network topology requirements. Important OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Load balancers Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. 
The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 14.36. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Configure the following ports on both the front and back of the load balancers: Table 14.37. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTP traffic Tip If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:FF:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 14.4.5.2. User-provisioned DNS requirements DNS is used for name resolution and reverse name resolution. 
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 14.38. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Bootstrap bootstrap.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Master hosts <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster. Worker hosts <worker><n>.<cluster_name>.<base_domain>. Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Tip You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records. The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another. Example 14.7. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. 
*.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF The following example BIND zone file shows sample PTR records for reverse name resolution. Example 14.8. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is "last octet" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF 14.4.6. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa . Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines. 14.4.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine.
Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14.4.8. Manually creating the installation configuration file For installations of OpenShift Container Platform that use user-provisioned infrastructure, you manually generate your installation configuration file. Prerequisites Obtain the OpenShift Container Platform installation program and the access token for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the following install-config.yaml file template and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 14.4.8.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 The fully-qualified hostname or IP address of the vCenter server. 10 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. 11 The password associated with the vSphere user. 12 The vSphere datacenter. 13 The default vSphere datastore to use. 14 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 The pull secret that you obtained from OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 14.4.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
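For illustration only, a sketch of how the resulting cluster-wide Proxy object typically references that config map after installation; the field values here mirror the example above and are assumptions rather than generated output:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
  trustedCA:
    name: user-ca-bundle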
Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.4.9. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file.
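A sketch of how the relevant part of the cluster-scheduler-02-config.yml manifest typically looks after the installation program generates it; fields other than mastersSchedulable are shown only for orientation and can differ slightly by release:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}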
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. The following files are generated in the directory: 14.4.10. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 14.4.11. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. 
You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Optional: Override default DHCP networking in vSphere. 
To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere: USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High . Ensure that your VM's CPU and memory reservation have the following values: Memory reservation value must be equal to its configured memory size. CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed. Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 14.4.12. Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. 
Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 14.4.13. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important Kubernetes supports only two filesystem partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a MachineConfig object and add it to a file in the openshift directory. 
For example, name the file 98-var-partition.yaml , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers , the unit must be named var-lib-containers.mount . 5 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes if those instance types do not have the same device name. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 14.4.14. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase.
If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example output variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 14.4.15. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 14.4.15.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.4.15.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 14.4.15.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.4.16. Creating the cluster To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Create the required infrastructure for the cluster. 
You obtained the installation program and generated the Ignition config files for your cluster. You used the Ignition config files to create RHCOS machines for your cluster. Your machines have direct Internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself. 14.4.17. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.4.18. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. 
If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 14.4.19. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Configure the Operators that are not available. 14.4.19.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 14.4.19.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.4.19.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. 
To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 14.4.19.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 14.4.19.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.
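If you provision the volume statically, a PersistentVolume that uses the in-tree vsphereVolume source can back the claim that you define in the next step. The following is a minimal sketch, not part of the documented procedure: the PV name, datastore path, and VMDK name are assumptions, and if a default storage class exists in your cluster the claim might be provisioned dynamically instead of binding to this PV.

# Hypothetical static PV for registry block storage; point volumePath at an existing VMDK.
cat <<'EOF' > registry-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: image-registry-block-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] kubevols/registry-disk.vmdk"
    fsType: ext4
EOF
# Create the PV so that the PVC defined in the next step can bind to it.
oc create -f registry-pv.yaml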
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 14.4.20. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... 
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation configuration documentation for more information. All the worker nodes are restarted. To monitor the process, enter the following command: USD oc get nodes -w Note If you have additional machine types such as infrastructure nodes, repeat the process for these types. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 14.4.21. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 14.4.22. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 14.4.23. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 14.5. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.7, you can install a cluster on VMware vSphere infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 14.5.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. Verify that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you must configure it to access Red Hat Insights . 14.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. 
Before you update the cluster, you update the content of the mirror registry. 14.5.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 14.39. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 6.5U3 or vSphere 6.7U2 and later vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware's NSX Container Plug-in (NCP) is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 14.5.4. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 14.5.4.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: One temporary bootstrap machine Three control plane, or master, machines At least two compute machines, which are also known as worker machines. Note The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7.9. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . Important All virtual machines must reside in the same datastore and in the same folder as the installer. 14.5.4.2. Network connectivity requirements All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config files from the Machine Config Server. The machines are configured with static IP addresses. No DHCP server is required. Additionally, each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server. 14.5.4.3. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. 
You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. 14.5.4.4. Minimum resource requirements Each cluster machine must meet the following minimum requirements: Table 14.40. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 14.5.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 14.5.5. Creating the user-provisioned infrastructure Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure. Prerequisites Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Provision the required load balancers. Configure the ports for your machines. Configure DNS. Ensure network connectivity. 14.5.5.1. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. Ensure that the machines have persistent IP addresses and host names. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster. Table 14.41. All machines to all machines Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . TCP/UDP 30000 - 32767 Kubernetes node port Table 14.42. All machines to control plane Protocol Port Description TCP 6443 Kubernetes API Table 14.43. 
Control plane machines to control plane machines Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Network topology requirements The infrastructure that you provision for your cluster must meet the following network topology requirements. Important OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Load balancers Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 14.44. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Configure the following ports on both the front and back of the load balancers: Table 14.45. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTP traffic Tip If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 
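For example, if you use HAProxy as the load balancer, a fragment like the following illustrates the port layout described above. This is a minimal sketch under assumed host names that match the sample DNS records later in this document; it is not a complete or tested configuration, it assumes a defaults section with mode tcp and suitable timeouts, and you must remove the bootstrap entries after the bootstrap machine initializes the control plane.

# Sketch of HAProxy listen sections for the API (6443), machine config server (22623), and ingress (443/80).
# Host names and the use of /readyz health checks follow the requirements above; adapt them to your environment.
cat <<'EOF' >> /etc/haproxy/haproxy.cfg
listen api-server-6443
    bind *:6443
    mode tcp
    option httpchk GET /readyz HTTP/1.0
    option log-health-checks
    balance roundrobin
    server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup
    server master0 master0.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3
    server master1 master1.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3
    server master2 master2.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623
    bind *:22623
    mode tcp
    server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup
    server master0 master0.ocp4.example.com:22623 check inter 1s
    server master1 master1.ocp4.example.com:22623 check inter 1s
    server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443
    bind *:443
    mode tcp
    balance source
    server worker0 worker0.ocp4.example.com:443 check inter 1s
    server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80
    bind *:80
    mode tcp
    balance source
    server worker0 worker0.ocp4.example.com:80 check inter 1s
    server worker1 worker1.ocp4.example.com:80 check inter 1s
EOF
systemctl restart haproxy

Whatever load balancer implementation you choose, the essential properties are Layer 4 (TCP) balancing, the /readyz health check for the API back ends, and removal of the bootstrap entries after bootstrapping completes.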
Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:FF:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 14.5.5.2. User-provisioned DNS requirements DNS is used for name resolution and reverse name resolution. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 14.46. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Bootstrap bootstrap.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Master hosts <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster. Worker hosts <worker><n>.<cluster_name>.<base_domain>. Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. 
These records must be resolvable by the nodes within the cluster. Tip You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records. The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another. Example 14.9. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF The following example BIND zone file shows sample PTR records for reverse name resolution. Example 14.10. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is "last octet" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF 14.5.6. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. 
Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.5.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14.5.8. Manually creating the installation configuration file For installations of OpenShift Container Platform that use user-provisioned infrastructure, you manually generate your installation configuration file. Prerequisites Obtain the OpenShift Container Platform installation program and the access token for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the following install-config.yaml file template and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 14.5.8.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 The fully-qualified hostname or IP address of the vCenter server. 10 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. 11 The password associated with the vSphere user. 12 The vSphere datacenter. 13 The default vSphere datastore to use. 14 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . 
If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 The pull secret that you obtained from OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 14.5.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.5.9. Network configuration phases When specifying a cluster configuration prior to installation, there are several phases in the installation procedures when you can modify the network configuration: Phase 1 After entering the openshift-install create install-config command. In the install-config.yaml file, you can customize the following network-related fields: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to "Installation configuration parameters". Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Phase 2 After entering the openshift-install create manifests command. If you must specify advanced network configuration, during this phase you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 14.5.10. Specifying advanced network configuration You can use advanced configuration customization to integrate your cluster into your existing network environment by specifying additional configuration for your cluster network provider. You can specify advanced network configuration only before you install the cluster. Important Modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites Create the install-config.yaml file and complete any modifications to it. Create the Ignition config files for your cluster. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. 
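Before editing the stub in the next step, you can optionally confirm that it was written where the installation program expects it. The following is a minimal sketch, not part of the documented procedure; <installation_directory> is the same placeholder used above.

# Confirm that the stub manifest exists alongside the manifests that
# 'openshift-install create manifests' generated.
ls <installation_directory>/manifests/ | grep cluster-network

# Review the stub contents before you edit it in the next step.
cat <installation_directory>/manifests/cluster-network-03-config.yml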
Open the cluster-network-03-config.yml file in an editor and specify the advanced network configuration for your cluster, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 14.5.11. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 14.5.11.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 14.47. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 This value is read-only and specified in the install-config.yaml file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 This value is read-only and specified in the install-config.yaml file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect.
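After the cluster is running, you can inspect the CNO object that results from these fields. The following is a minimal sketch, assuming the oc client is configured with cluster-admin access; network.operator.openshift.io and network.config.openshift.io are the API objects that hold this configuration.

# Show the full Cluster Network Operator configuration, including any
# defaultNetwork and kubeProxyConfig customizations applied at install time.
oc get network.operator.openshift.io cluster -o yaml

# The read-only values inherited from install-config.yaml are also reflected
# on the cluster Network config object.
oc get network.config.openshift.io cluster \
  -o jsonpath='{.spec.networkType}{"\n"}{.spec.clusterNetwork}{"\n"}{.spec.serviceNetwork}{"\n"}'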
defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 14.48. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 14.49. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 14.50. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. 
If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 14.51. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 14.5.12. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. 
Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory: the auth directory, which contains the kubeconfig and kubeadmin-password files, and the bootstrap.ign , master.ign , worker.ign , and metadata.json files. 14.5.13. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 14.5.14. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.
The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere: USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High . 
Ensure that your VM's CPU and memory reservation have the following values: Memory reservation value must be equal to its configured memory size. CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed. Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 14.5.15. Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 14.5.16. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. 
However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important Kubernetes supports only two filesystem partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. 
This example places the /var directory on a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers , the unit must be named var-lib-containers.mount . 5 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes if the different instance types do not have the same device name. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 14.5.17. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config.
Provide a machine config file with the enabled systemd unit, as shown in the following example: Example output variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 14.5.18. Creating the cluster To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Create the required infrastructure for the cluster. You obtained the installation program and generated the Ignition config files for your cluster. You used the Ignition config files to create RHCOS machines for your cluster. Your machines have direct Internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself. 14.5.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.5.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
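While you wait for the compute nodes to appear, it can be convenient to watch the node list and any pending CSRs together. The following loop is a minimal sketch, not part of the official procedure; it reuses the go-template filter shown in the next steps and assumes that KUBECONFIG is already exported as described earlier.

#!/usr/bin/env bash
# Poll the cluster while machines join: print the current nodes and the
# names of any CSRs that do not yet have a status (that is, still pending).
while true; do
  date
  oc get nodes
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'
  sleep 30
done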
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 14.5.20.1. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Configure the Operators that are not available. 14.5.20.2. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 14.5.20.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. 
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.5.20.3.1. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 14.5.21. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation configuration documentation for more information. All the worker nodes are restarted. To monitor the process, enter the following command: USD oc get nodes -w Note If you have additional machine types such as infrastructure nodes, repeat the process for these types. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 14.5.22. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 14.5.23. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 14.5.24. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 14.6. Installing a cluster on vSphere in a restricted network In OpenShift Container Platform 4.7, you can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 14.6.1. 
Prerequisites Create a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide the ReadWriteMany access mode. Review details about the OpenShift Container Platform installation and update processes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 14.6.2. About installations in restricted networks In OpenShift Container Platform 4.7, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions. 14.6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 14.6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to obtain the images that are necessary to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. 
With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 14.6.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 14.52. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 6.5U3 or vSphere 6.7U2 and later vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware's NSX Container Plug-in (NCP) is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 14.6.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 14.53. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 14.54. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 14.55. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 14.6.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. 
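Before working through the role and privilege tables that follow, it can help to confirm that the account and vCenter endpoint you plan to use are reachable. The following govc sketch is a convenience only and is not part of the documented procedure; govc is an optional community CLI for vSphere, and the GOVC_* environment variables shown are govc's own conventions, with placeholder values.

# Point govc at the vCenter that the installation program will use.
export GOVC_URL='https://your.vcenter.server'
export GOVC_USERNAME='username'
export GOVC_PASSWORD='password'
export GOVC_INSECURE=true   # only if vCenter presents a self-signed certificate

# Confirm that port 443 is reachable and that the vSphere version meets the
# minimums listed in the table above.
govc about

# List the permissions that are visible to this account at the vCenter root.
govc permissions.ls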
If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 14.11. Roles and privileges required for installation vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone Folder.Create Folder.Delete Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. 
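As an illustration only, the following sketch creates a custom vSphere role and assigns it to an installation user with propagation enabled, again using govc as an assumed client. It shows only a small subset of the privileges listed above, and the role name, user principal, and inventory path are placeholders; grant the full set of privileges from the tables that apply to your configuration:
$ govc role.create openshift-installer Cns.Searchable InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag Sessions.ValidateSession StorageProfile.View
$ govc permissions.set -principal <user>@<domain> -role openshift-installer -propagate=true /<datacenter_name>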
Example 14.12. Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend to use vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines, as sketched in the example below. All nodes must be in the same VLAN.
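A minimal sketch of persistent DHCP reservations, assuming a dnsmasq-based DHCP server; any DHCP server that supports fixed leases works, and the MAC addresses, host names, and IP addresses shown are placeholders:
# /etc/dnsmasq.d/openshift.conf
dhcp-range=192.168.100.10,192.168.100.100,12h
dhcp-host=00:50:56:00:00:10,control-plane-0,192.168.100.20,infinite
dhcp-host=00:50:56:00:00:11,control-plane-1,192.168.100.21,infinite
dhcp-host=00:50:56:00:00:12,control-plane-2,192.168.100.22,infinite
dhcp-host=00:50:56:00:00:20,compute-0,192.168.100.30,infinite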
You cannot scale the cluster using a second VLAN as a Day 2 operation. The VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks cause errors, which an NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 14.56. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 14.6.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm.
Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.6.8. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 14.6.9. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network VMware vSphere environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.7 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - vSphere image. Upload the image you downloaded to a location that is accessible from the bastion server. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 14.6.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Paste the pull secret from the Red Hat OpenShift Cluster Manager . In the install-config.yaml file, set the value of platform.vsphere.clusterOSImage to the image location or name. For example: platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 
Add the image content resources, which look like this excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.example.com/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.example.com/ocp/release To complete these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 14.6.10.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 14.6.10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 14.57. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 14.6.10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. 
For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 14.58. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 14.6.10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 14.59. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. 
Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 14.6.10.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 14.60. Additional VMware vSphere cluster parameters Parameter Description Values platform.vsphere.vCenter The fully-qualified hostname or IP address of the vCenter server. String platform.vsphere.username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String platform.vsphere.password The password for the vCenter user name. String platform.vsphere.datacenter The name of the datacenter to use in the vCenter instance. String platform.vsphere.defaultDatastore The name of the default datastore to use for provisioning volumes. String platform.vsphere.folder Optional . The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . platform.vsphere.network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String platform.vsphere.cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String platform.vsphere.apiVIP The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . platform.vsphere.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . 14.6.10.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 14.61. Optional VMware vSphere machine pool parameters Parameter Description Values platform.vsphere.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . platform.vsphere.osDisk.diskSizeGB The size of the disk in gigabytes. Integer platform.vsphere.cpus The total number of virtual processor cores to assign a virtual machine. Integer platform.vsphere.coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value is 1 Integer platform.vsphere.memoryMB The size of a virtual machine's memory in megabytes. Integer 14.6.10.2. 
Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-48.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 7 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 8 The cluster name that you specified in your DNS records. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 10 The location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that is accessible from the bastion server. 11 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 12 Provide the contents of the certificate file that you used for your mirror registry. 13 Provide the imageContentSources section from the output of the command to mirror the repository. 14.6.10.3. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. + Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. + Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. + Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 14.6.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 14.6.12.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.6.12.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 14.6.12.3. 
Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.6.14. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Global Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 14.6.15. Creating registry storage After you install the cluster, you must create storage for the Registry Operator. 14.6.15.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 14.6.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. 
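For reference, a persistent volume claim that satisfies the registry requirements described below (ReadWriteMany access and 100Gi capacity) might look like the following sketch. Creating it manually is optional; as noted in the procedure, you can instead leave the claim field blank so that an image-registry-storage claim is created automatically. The storage class name is a placeholder:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: <storage_class_name>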
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.6.15.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 14.6.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 14.6.17. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . 14.7. 
Installing a cluster on vSphere in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.7, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 14.7.1. Prerequisites Create a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Review details about the OpenShift Container Platform installation and update processes. Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 14.7.2. About installations in restricted networks In OpenShift Container Platform 4.7, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. 
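As noted in the prerequisites above, completing a user-provisioned installation requires uploading the Red Hat Enterprise Linux CoreOS (RHCOS) OVA to your vSphere environment. One possible way to do this from the mirror host is sketched below with govc, which is an assumption rather than a requirement; the datastore, resource pool, and file name are placeholders:
$ govc import.ova -ds=<datastore> -pool=<resource_pool> -name=rhcos ./rhcos-vmware.x86_64.ova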
Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 14.7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 14.7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to obtain the images that are necessary to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 14.7.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 14.62. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 6.5U3 or vSphere 6.7U2 and later vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. VMware's NSX Container Plug-in (NCP) is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 14.7.5. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 14.7.5.1. 
Required machines The smallest OpenShift Container Platform clusters require the following hosts: One temporary bootstrap machine Three control plane, or master, machines At least two compute machines, which are also known as worker machines. Note The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7.9. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . Important All virtual machines must reside in the same datastore and in the same folder as the installer. 14.7.5.2. Network connectivity requirements All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config files from the Machine Config Server. The machines are configured with static IP addresses. No DHCP server is required. Additionally, each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server. 14.7.5.3. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. 14.7.5.4. Minimum resource requirements Each cluster machine must meet the following minimum requirements: Table 14.63. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 14.7.5.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 14.7.6. Creating the user-provisioned infrastructure Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure. Prerequisites Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Provision the required load balancers. Configure the ports for your machines. Configure DNS. Ensure network connectivity. 14.7.6.1. 
Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. Ensure that the machines have persistent IP addresses and host names. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster. Table 14.64. All machines to all machines Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . TCP/UDP 30000 - 32767 Kubernetes node port Table 14.65. All machines to control plane Protocol Port Description TCP 6443 Kubernetes API Table 14.66. Control plane machines to control plane machines Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Network topology requirements The infrastructure that you provision for your cluster must meet the following network topology requirements. Important OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Load balancers Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 14.67. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. 
Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Configure the following ports on both the front and back of the load balancers: Table 14.68. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTP traffic Tip If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:FF:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 14.7.6.2. User-provisioned DNS requirements DNS is used for name resolution and reverse name resolution. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 14.69. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. 
Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Bootstrap bootstrap.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Master hosts <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster. Worker hosts <worker><n>.<cluster_name>.<base_domain>. Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Tip You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records. The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another. Example 14.13. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF The following example BIND zone file shows sample PTR records for reverse name resolution. Example 14.14. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is "last octet" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. 
; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF 14.7.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging are required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa . Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines. 14.7.8. Manually creating the installation configuration file For installations of OpenShift Container Platform that use user-provisioned infrastructure, you manually generate your installation configuration file. Prerequisites Obtain the OpenShift Container Platform installation program and the access token for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
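Before you customize the install-config.yaml template in the next step, it can help to gather the values that the template needs, such as the contents of your mirror registry certificate and your SSH public key. The following commands are a minimal sketch only; the certificate path /opt/registry/certs/domain.crt and the key file ~/.ssh/id_rsa.pub are hypothetical examples, so substitute the locations that you use on your mirror host:
USD cat /opt/registry/certs/domain.crt
USD cat ~/.ssh/id_rsa.pub
You can paste the output of these commands into the additionalTrustBundle and sshKey fields of the install-config.yaml file.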
Customize the following install-config.yaml file template and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 14.7.8.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14 fips: false 15 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. 
You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 The fully-qualified hostname or IP address of the vCenter server. 10 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. 11 The password associated with the vSphere user. 12 The vSphere datacenter. 13 The default vSphere datastore to use. 14 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 18 Provide the contents of the certificate file that you used for your mirror registry. 19 Provide the imageContentSources section from the output of the command to mirror the repository. 14.7.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.7.9. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . 
The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. + Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. The following files are generated in the directory: 14.7.10. Configuring chrony time service You must set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create the contents of the chrony.conf file and encode it as base64. For example: USD cat << EOF | base64 pool 0.rhel.pool.ntp.org iburst 1 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony EOF 1 Specify any valid, reachable time source, such as the one provided by your DHCP server. Example output ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv dmFyL2xvZy9jaHJvbnkK Create the MachineConfig object file, replacing the base64 string with the one you just created. This example adds the file to master nodes. You can change it to worker or make an additional MachineConfig for the worker role. 
Create MachineConfig files for each type of machine that your cluster uses: USD cat << EOF > ./99-masters-chrony-configuration.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-masters-chrony-configuration spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.2.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK mode: 420 1 overwrite: true path: /etc/chrony.conf osImageURL: "" EOF 1 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . Make a backup copy of the configuration files. Apply the configurations in one of two ways: If the cluster is not up yet, after you generate manifest files, add this file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-masters-chrony-configuration.yaml 14.7.11. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 14.7.12. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. 
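Optionally, before you provision the bootstrap VM, verify that the hosted bootstrap Ignition config file is reachable from the network where the cluster machines boot. This is a minimal check only, and the URL http://192.168.1.5:8080/bootstrap.ign is a hypothetical example of where you might have uploaded the file; substitute the URL of your HTTP server:
USD curl -sI http://192.168.1.5:8080/bootstrap.ign
A response with the 200 status code indicates that the machines can fetch the Ignition config file during boot.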
Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. After the template deploys, deploy a VM for a machine in the cluster. 
Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere: USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High . Ensure that your VM's CPU and memory reservation have the following values: Memory reservation value must be equal to its configured memory size. CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed. Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 14.7.13. Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . 
On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 14.7.14. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important Kubernetes supports only two filesystem partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. 
Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers , the unit must be named var-lib-containers.mount . 5 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 14.7.15. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. 
Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example configuration variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 14.7.16. Creating the cluster To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites You created the required infrastructure for the cluster. You obtained the installation program and generated the Ignition config files for your cluster. You used the Ignition config files to create RHCOS machines for your cluster. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself. 14.7.17. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin 14.7.18.
Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 14.7.19. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Configure the Operators that are not available. 14.7.19.1. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration Cluster Settings Global Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 14.7.19.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.7.19.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation, you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 14.7.19.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters.
If you run the oc patch command before the Image Registry Operator initializes its components, the command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 14.7.19.2.3. Configuring block registry storage for VMware vSphere As a cluster administrator, you can use the Recreate rollout strategy to allow the image registry to use block storage types, such as vSphere Virtual Machine Disk (VMDK), during upgrades. Important Block storage volumes are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 14.7.20. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation configuration documentation for more information. All the worker nodes are restarted. To monitor the process, enter the following command: USD oc get nodes -w Note If you have additional machine types such as infrastructure nodes, repeat the process for these types. Register your cluster on the Cluster registration page. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 14.7.21. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 14.7.22. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 14.7.23. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 14.8. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure You can remove a cluster that you deployed in your VMware vSphere instance by using installer-provisioned infrastructure.
Note When you run the openshift-install destroy cluster command to uninstall OpenShift Container Platform, vSphere volumes are not automatically deleted. The cluster administrator must manually find the vSphere volumes and delete them. 14.8.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites Have a copy of the installation program that you used to deploy the cluster. Have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 14.9. Using the vSphere Problem Detector Operator 14.9.1. About the vSphere Problem Detector Operator The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. The Operator runs in the openshift-cluster-storage-operator namespace and is started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. The vSphere Problem Detector Operator communicates with the vSphere vCenter Server to determine the virtual machines in the cluster, the default datastore, and other information about the vSphere vCenter Server configuration. The Operator uses the credentials from the Cloud Credential Operator to connect to vSphere. The Operator runs the checks according to the following schedule: The checks run every 8 hours. If any check fails, the Operator runs the checks again in intervals of 1 minute, 2 minutes, 4, 8, and so on. The Operator doubles the interval up to a maximum interval of 8 hours. When all checks pass, the schedule returns to an 8 hour interval. The Operator increases the frequency of the checks after a failure so that the Operator can report success quickly after the failure condition is remedied. You can run the Operator manually for immediate troubleshooting information. 14.9.2. Running the vSphere Problem Detector Operator checks You can override the schedule for running the vSphere Problem Detector Operator checks and run the checks immediately. The vSphere Problem Detector Operator automatically runs the checks every 8 hours. However, when the Operator starts, it runs the checks immediately. The Operator is started by the Cluster Storage Operator when the Cluster Storage Operator starts and determines that the cluster is running on vSphere. 
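Before you override the schedule, you can confirm that the Operator deployment is present and running. The following is a minimal sketch that uses the deployment name, label, and namespace shown in the procedure that follows; it is not part of the documented procedure:

# Confirm that the vSphere Problem Detector Operator deployment exists.
oc get deployment/vsphere-problem-detector-operator \
  -n openshift-cluster-storage-operator

# Confirm that its pod is running.
oc get pods -l name=vsphere-problem-detector-operator \
  -n openshift-cluster-storage-operator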
To run the checks immediately, scale the vSphere Problem Detector Operator deployment to 0 and back to 1 so that the Operator restarts and runs the checks again. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Scale the Operator to 0 : USD oc scale deployment/vsphere-problem-detector-operator --replicas=0 \ -n openshift-cluster-storage-operator If the deployment does not scale to zero immediately, you can run the following command to wait for the pods to exit: USD oc wait pods -l name=vsphere-problem-detector-operator \ --for=delete --timeout=5m -n openshift-cluster-storage-operator Scale the Operator back to 1 : USD oc scale deployment/vsphere-problem-detector-operator --replicas=1 \ -n openshift-cluster-storage-operator Delete the old leader lock to speed up the new leader election for the Cluster Storage Operator: USD oc delete -n openshift-cluster-storage-operator \ cm vsphere-problem-detector-lock Verification View the events or logs that are generated by the vSphere Problem Detector Operator. Confirm that the events or logs have recent timestamps. 14.9.3. Viewing the events from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates events that can be viewed from the command line or from the OpenShift Container Platform web console. Procedure To view the events by using the command line, run the following command: USD oc get event -n openshift-cluster-storage-operator \ --sort-by={.metadata.creationTimestamp} Example output 16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader To view the events by using the OpenShift Container Platform web console, navigate to Home > Events and select openshift-cluster-storage-operator from the Project menu. 14.9.4. Viewing the logs from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates log records that can be viewed from the command line or from the OpenShift Container Platform web console. Procedure To view the logs by using the command line, run the following command: USD oc logs deployment/vsphere-problem-detector-operator \ -n openshift-cluster-storage-operator Example output I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed To view the Operator logs with the OpenShift Container Platform web console, perform the following steps: Navigate to Workloads > Pods . Select openshift-cluster-storage-operator from the Project menu. Click the link for the vsphere-problem-detector-operator pod. Click the Logs tab on the Pod details page to view the logs. 14.9.5. Configuration checks run by the vSphere Problem Detector Operator The following tables identify the configuration checks that the vSphere Problem Detector Operator runs. Some checks verify the configuration of the cluster.
Other checks verify the configuration of each node in the cluster. Table 14.70. Cluster configuration checks Name Description CheckDefaultDatastore Verifies that the default datastore name in the vSphere configuration is short enough for use with dynamic provisioning. If this check fails, you can expect the following: systemd logs errors to the journal such as Failed to set up mount unit: Invalid argument . systemd does not unmount volumes if the virtual machine is shut down or rebooted without draining all the pods from the node. If this check fails, reconfigure vSphere with a shorter name for the default datastore. CheckFolderPermissions Verifies the permission to list volumes in the default datastore. This permission is required to create volumes. The Operator verifies the permission by listing the / and /kubevols directories. The root directory must exist. It is acceptable if the /kubevols directory does not exist when the check runs. The /kubevols directory is created when the datastore is used with dynamic provisioning if the directory does not already exist. If this check fails, review the required permissions for the vCenter account that was specified during the OpenShift Container Platform installation. CheckStorageClasses Verifies the following: The fully qualified path to each persistent volume that is provisioned by this storage class is less than 255 characters. If a storage class uses a storage policy, the storage class must use one policy only and that policy must be defined. CheckTaskPermissions Verifies the permission to list recent tasks and datastores. ClusterInfo Collects the cluster version and UUID from vSphere vCenter. Table 14.71. Node configuration checks Name Description CheckNodeDiskUUID Verifies that all the vSphere virtual machines are configured with disk.enableUUID=TRUE . If this check fails, see the How to check 'disk.EnableUUID' parameter from VM in vSphere Red Hat Knowledgebase solution. CheckNodeProviderID Verifies that all nodes are configured with the ProviderID from vSphere vCenter. This check fails when the output from the following command does not include a provider ID for each node. USD oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID If this check fails, refer to the vSphere product documentation for information about setting the provider ID for each node in the cluster. CollectNodeESXiVersion Reports the version of the ESXi hosts that run nodes. CollectNodeHWVersion Reports the virtual machine hardware version for a node. 14.9.6. About the storage class configuration check The names for persistent volumes that use vSphere storage are related to the datastore name and cluster ID. When a persistent volume is created, systemd creates a mount unit for the persistent volume. The systemd process has a 255 character limit for the length of the fully qualified path to the VMDK file that is used for the persistent volume. The fully qualified path is based on the naming conventions for systemd and vSphere. The naming conventions use the following pattern: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk The naming conventions require 205 characters of the 255 character limit. The datastore name and the cluster ID are determined from the deployment. The datastore name and cluster ID are substituted into the preceding pattern.
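As a rough, illustrative sketch (with hypothetical datastore and cluster ID values), you can estimate whether a given datastore name stays within the limit by building the path from the preceding pattern and applying the systemd-escape processing that is described in the next paragraph; the actual mount unit name that systemd generates might differ slightly:

# Hypothetical values; substitute your own datastore name and cluster ID.
DATASTORE="datastore1"
CLUSTER_ID="mycluster-x7k9p"
# Build the path by following the naming convention pattern from this section.
VOLUME_PATH="/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[${DATASTORE}] 00000000-0000-0000-0000-000000000000/${CLUSTER_ID}-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk"
# Escape the path in the same way that systemd does and report its length.
ESCAPED=$(systemd-escape -p "${VOLUME_PATH}")
echo "Escaped length: ${#ESCAPED} characters (must be less than 255)"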
Then the path is processed with the systemd-escape command to escape special characters. For example, a hyphen character uses four characters after it is escaped. The escaped value is \x2d . After the path is processed with systemd-escape, its length must be less than 255 characters so that systemd can access the fully qualified path to the VMDK file. 14.9.7. Metrics for the vSphere Problem Detector Operator The vSphere Problem Detector Operator exposes the following metrics for use by the OpenShift Container Platform monitoring stack. Table 14.72. Metrics exposed by the vSphere Problem Detector Operator Name Description vsphere_cluster_check_total Cumulative number of cluster-level checks that the vSphere Problem Detector Operator performed. This count includes both successes and failures. vsphere_cluster_check_errors Number of failed cluster-level checks that the vSphere Problem Detector Operator performed. For example, a value of 1 indicates that one cluster-level check failed. vsphere_esxi_version_total Number of ESXi hosts with a specific version. Be aware that if a host runs more than one node, the host is counted only once. vsphere_node_check_total Cumulative number of node-level checks that the vSphere Problem Detector Operator performed. This count includes both successes and failures. vsphere_node_check_errors Number of failed node-level checks that the vSphere Problem Detector Operator performed. For example, a value of 1 indicates that one node-level check failed. vsphere_node_hw_version_total Number of vSphere nodes with a specific hardware version. vsphere_vcenter_info Information about the vSphere vCenter Server. 14.9.8. Additional resources Monitoring overview
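As a brief illustration of how the metrics in Table 14.72 can be consumed, the following sketch queries one of them through the cluster monitoring stack from the command line. It assumes the default thanos-querier route in the openshift-monitoring namespace and a logged-in user that is permitted to query metrics; it is not part of the documented procedures:

# Query the vsphere_cluster_check_errors metric through the monitoring stack.
TOKEN=$(oc whoami -t)
HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://${HOST}/api/v1/query" --data-urlencode 'query=vsphere_cluster_check_errors'

A value greater than 0 in the result indicates that at least one cluster-level check is currently failing.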
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvzf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: 
vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvzf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: 
Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvzf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. 
IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is \"last octet\" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "tar xvzf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m 
system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True 
False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc get nodes -w", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is \"last octet\" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. 
; ;EOF", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? 
SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False 
False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 
Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc get nodes -w", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.example.com/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.example.com/ocp/release", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-48.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvzf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is \"last octet\" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 
17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "cat << EOF | base64 pool 0.rhel.pool.ntp.org iburst 1 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony EOF", "ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv dmFyL2xvZy9jaHJvbnkK", "cat << EOF > ./99-masters-chrony-configuration.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-masters-chrony-configuration spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.2.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK mode: 420 1 overwrite: true path: /etc/chrony.conf osImageURL: \"\" EOF", "oc apply -f ./99-masters-chrony-configuration.yaml", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? 
SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False 
False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 
4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc get nodes -w", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator", "oc wait pods -l name=vsphere-problem-detector-operator --for=delete --timeout=5m -n openshift-cluster-storage-operator", "oc scale deployment/vsphere-problem-detector-operator --replicas=1 -n openshift-cluster-storage-operator", "oc delete -n openshift-cluster-storage-operator cm vsphere-problem-detector-lock", "oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}", "16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader", "oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator", "I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed", "oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID", "/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/installing/installing-on-vsphere
Chapter 62. Chunk Component
Chapter 62. Chunk Component Available as of Camel version 2.15 The chunk: component allows for processing a message using a Chunk template. This can be ideal when using Templating to generate responses for requests. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-chunk</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 62.1. URI format chunk:templateName[?options] Where templateName is the classpath-local URI of the template to invoke. You can append query options to the URI in the following format, ?option=value&option=value&... 62.2. Options The Chunk component supports 2 options, which are listed below. Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow using a resource template from the header or not (default false). Enabling this option has security ramifications. For example, if the header contains untrusted or user derived content, this can ultimately impact on the confidentiality and integrity of your end application, so use this option with caution. false boolean The Chunk endpoint is configured using URI syntax: chunk:resourceUri with the following path and query parameters: 62.2.1. Path Parameters (1 parameter): Name Description Default Type resourceUri Required Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http load the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 62.2.2. Query Parameters (9 parameters): Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow using a resource template from the header or not (default false). Enabling this option has security ramifications. For example, if the header contains untrusted or user derived content, this can ultimately impact on the confidentiality and integrity of your end application, so use this option with caution. false boolean contentCache (producer) Sets whether to use resource content cache or not false boolean encoding (producer) Define the encoding of the body String extension (producer) Define the file extension of the template String themeFolder (producer) Define the themes folder to scan String themeLayer (producer) Define the theme layer to elaborate String themeSubfolder (producer) Define the themes subfolder to scan String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 62.3.
Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.chunk.enabled Enable chunk component true Boolean camel.component.chunk.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean The Chunk component will look for a specific template in the themes folder with extensions .chtml or .cxml. If you need to specify a different folder or extensions, you will need to use the specific options listed above. 62.4. Chunk Context Camel will provide exchange information in the Chunk context (just a Map ). The Exchange is transferred as: key value exchange The Exchange itself. exchange.properties The Exchange properties. headers The headers of the In message. camelContext The Camel Context. request The In message. body The In message body. response The Out message (only for InOut message exchange pattern). 62.5. Dynamic templates Camel provides two headers by which you can define a different resource location for a template or the template content itself. If any of these headers is set then Camel uses this over the endpoint configured resource. This allows you to provide a dynamic template at runtime. Header Type Description Support Version ChunkConstants.CHUNK_RESOURCE_URI String A URI for the template resource to use instead of the endpoint configured. ChunkConstants.CHUNK_TEMPLATE String The template to use instead of the endpoint configured. 62.6. Samples For example, to use a Chunk template to formulate a response for a message for InOut message exchanges (where there is a JMSReplyTo header), you could use something like: from("activemq:My.Queue"). to("chunk:template"); If you want to use InOnly and consume the message and send it to another destination you could use: from("activemq:My.Queue"). to("chunk:template"). to("activemq:Another.Queue"); It's possible to specify what template the component should use dynamically via a header, so for example: from("direct:in"). setHeader(ChunkConstants.CHUNK_RESOURCE_URI).constant("template"). to("chunk:dummy?allowTemplateFromHeader=true"); Warning Enabling the allowTemplateFromHeader option has security ramifications. For example, if the header contains untrusted or user derived content, this can ultimately impact on the confidentiality and integrity of your end application, so use this option with caution. An example of using the Chunk component options: from("direct:in"). to("chunk:file_example?themeFolder=template&themeSubfolder=subfolder&extension=chunk"); In this example, the Chunk component will look for the file file_example.chunk in the folder template/subfolder. 62.7. The Email Sample In this sample we want to use Chunk templating for an order confirmation email. The email template is laid out in Chunk as: Dear {USDheaders.lastName}, {USDheaders.firstName} Thanks for the order of {USDheaders.item}. Regards Camel Riders Bookstore {USDbody} 62.8. See Also Configuring Camel Component Endpoint Getting Started
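The Dynamic templates table above lists a ChunkConstants.CHUNK_TEMPLATE header that the samples do not exercise. The following is a minimal sketch, not taken from the component documentation, of a route that supplies the template body itself at runtime through that header; the endpoint name and template text are illustrative only, and allowTemplateFromHeader must still be enabled on the endpoint, keeping the security warning above in mind.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.chunk.ChunkConstants;

public class OrderConfirmationRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:confirm")
            // Supply the Chunk template body in a header instead of a classpath resource
            .setHeader(ChunkConstants.CHUNK_TEMPLATE,
                constant("Thank you for your order. Regards, Camel Riders Bookstore."))
            // The endpoint must opt in to reading the template from the header
            .to("chunk:dummy?allowTemplateFromHeader=true");
    }
}

The same pattern applies to ChunkConstants.CHUNK_RESOURCE_URI when only the template location, rather than its content, needs to change at runtime.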
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-chunk</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "chunk:templateName[?options]", "chunk:resourceUri", "from(\"activemq:My.Queue\"). to(\"chunk:template\");", "from(\"activemq:My.Queue\"). to(\"chunk:template\"). to(\"activemq:Another.Queue\");", "from(\"direct:in\"). setHeader(ChunkConstants.CHUNK_RESOURCE_URI).constant(\"template\"). to(\"chunk:dummy?allowTemplateFromHeader=true\");", "from(\"direct:in\"). to(\"chunk:file_example?themeFolder=template&themeSubfolder=subfolder&extension=chunk\");", "Dear {USDheaders.lastName}, {USDheaders.firstName} Thanks for the order of {USDheaders.item}. Regards Camel Riders Bookstore {USDbody}" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/chunk-component
Chapter 2. Clair concepts
Chapter 2. Clair concepts The following sections provide a conceptual overview of how Clair works. 2.1. Clair in practice A Clair analysis is broken down into three distinct parts: indexing, matching, and notification. 2.1.1. Indexing Clair's indexer service plays a crucial role in understanding the makeup of a container image. In Clair, container image representations are called "manifests." Manifests are used to comprehend the contents of the image's layers. To streamline this process, Clair takes advantage of the fact that Open Container Initiative (OCI) manifests and layers are designed for content addressing, reducing repetitive tasks. During indexing, a manifest that represents a container image is taken and broken down into its essential components. The indexer's job is to uncover the image's contained packages, its origin distribution, and the package repositories it relies on. This valuable information is then recorded and stored within Clair's database. The insights gathered during indexing serve as the basis for generating a comprehensive vulnerability report. This report can be seamlessly transferred to a matcher node for further analysis and action, helping users make informed decisions about their container images' security. The IndexReport is stored in Clair's database. It can be fed to a matcher node to compute the vulnerability report. 2.1.2. Matching With Clair, a matcher node is responsible for matching vulnerabilities to a provided index report. Matchers are responsible for keeping the database of vulnerabilities up to date. Matchers run a set of updaters, which periodically probe their data sources for new content. New vulnerabilities are stored in the database when they are discovered. The matcher API is designed to be used often and to always provide the most recent VulnerabilityReport when queried. The VulnerabilityReport summarizes both a manifest's content and any vulnerabilities affecting the content. 2.1.3. Notifier service Clair uses a notifier service that keeps track of new security database updates and informs users if new or removed vulnerabilities affect an indexed manifest. When the notifier becomes aware of new vulnerabilities affecting a previously indexed manifest, it uses the configured methods in your config.yaml file to issue notifications about the new changes. Returned notifications express the most severe vulnerability discovered because of the change. This avoids creating excessive notifications for the same security database update. When a user receives a notification, they can issue a new request against the matcher to receive an up-to-date vulnerability report. You can subscribe to notifications through the following mechanisms: Webhook delivery AMQP delivery STOMP delivery Configuring the notifier is done through the Clair YAML configuration file. 2.2. Clair authentication In its current iteration, Clair v4 (Clair) handles authentication internally. Note Previous versions of Clair used JWT Proxy to gate authentication. Authentication is configured by specifying configuration objects underneath the auth key of the configuration. Multiple authentication configurations might be present, but they are used preferentially in the following order: PSK.
With this authentication configuration, Clair implements JWT-based authentication using a pre-shared key. Configuration. For example: auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer' In this configuration the auth field requires two parameters: iss , which is the issuer to validate all incoming requests, and key , which is a base64 coded symmetric key for validating the requests. 2.3. Clair updaters Clair uses Go packages called updaters that contain the logic of fetching and parsing different vulnerability databases. Updaters are usually paired with a matcher to interpret if, and how, any vulnerability is related to a package. Administrators might want to update the vulnerability database less frequently, or not import vulnerabilities from databases that they know will not be used. 2.4. Information about Clair updaters The following table provides details about each Clair updater, including the configuration parameter, a brief description, relevant URLs, and the associated components that they interact with. This list is not exhaustive, and some servers might issue redirects, while certain request URLs are dynamically constructed to ensure accurate vulnerability data retrieval. For Clair, each updater is responsible for fetching and parsing vulnerability data related to a specific package type or distribution. For example, the Debian updater focuses on Debian-based Linux distributions, while the AWS updater focuses on vulnerabilities specific to Amazon Web Services' Linux distributions. Understanding the package type is important for vulnerability management because different package types might have unique security concerns and require specific updates and patches. Note If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URL needs to be added to the proxy allowlist to ensure that Clair can access them unimpeded. Use the following table to add updater URLs to your proxy allowlist. Table 2.1. Clair updater information Updater Description URLs Component alpine The Alpine updater is responsible for fetching and parsing vulnerability data related to packages in Alpine Linux distributions. https://secdb.alpinelinux.org/ Alpine Linux SecDB database aws The AWS updater is focused on AWS Linux-based packages, ensuring that vulnerability information specific to Amazon Web Services' custom Linux distributions is kept up-to-date. http://repo.us-west-2.amazonaws.com/2018.03/updates/x86_64/mirror.list https://cdn.amazonlinux.com/2/core/latest/x86_64/mirror.list https://cdn.amazonlinux.com/al2023/core/mirrors/latest/x86_64/mirror.list Amazon Web Services (AWS) UpdateInfo debian The Debian updater is essential for tracking vulnerabilities in packages associated with Debian-based Linux distributions. https://deb.debian.org/ https://security-tracker.debian.org/tracker/data/json Debian Security Tracker clair.cvss The Clair Common Vulnerability Scoring System (CVSS) updater focuses on maintaining data about vulnerabilities and their associated CVSS scores. This is not tied to a specific package type but rather to the severity and risk assessment of vulnerabilities in general. https://nvd.nist.gov/feeds/json/cve/1.1/ National Vulnerability Database (NVD) feed for Common Vulnerabilities and Exposures (CVE) data in JSON format oracle The Oracle updater is dedicated to Oracle Linux packages, maintaining data on vulnerabilities that affect Oracle Linux systems. 
https://linux.oracle.com/security/oval/com.oracle.elsa-*.xml.bz2 Oracle Oval database photon The Photon updater deals with packages in VMware Photon OS. https://packages.vmware.com/photon/photon_oval_definitions/ VMware Photon OS oval definitions rhel The Red Hat Enterprise Linux (RHEL) updater is responsible for maintaining vulnerability data for packages in Red Hat's Enterprise Linux distribution. https://access.redhat.com/security/cve/ https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST Red Hat Enterprise Linux (RHEL) Oval database rhcc The Red Hat Container Catalog (RHCC) updater is connected to Red Hat's container images. This updater ensures that vulnerability information related to Red Hat's containerized software is kept current. https://access.redhat.com/security/data/metrics/cvemap.xml Resource Handler Configuration Controller (RHCC) database suse The SUSE updater manages vulnerability information for packages in the SUSE Linux distribution family, including openSUSE, SUSE Enterprise Linux, and others. https://support.novell.com/security/oval/ SUSE Oval database ubuntu The Ubuntu updater is dedicated to tracking vulnerabilities in packages associated with Ubuntu-based Linux distributions. Ubuntu is a popular distribution in the Linux ecosystem. https://security-metadata.canonical.com/oval/com.ubuntu.*.cve.oval.xml https://api.launchpad.net/1.0/ Ubuntu Oval Database osv The Open Source Vulnerability (OSV) updater specializes in tracking vulnerabilities within open source software components. OSV is a critical resource that provides detailed information about security issues found in various open source projects. https://osv-vulnerabilities.storage.googleapis.com/ Open Source Vulnerabilities database 2.5. Configuring updaters Updaters can be configured by the updaters.sets key in your clair-config.yaml file. Important If the sets field is not populated, it defaults to using all sets. In using all sets, Clair tries to reach the URL or URLs of each updater. If you are using a proxy environment, you must add these URLs to your proxy allowlist. If updaters are being run automatically within the matcher process, which is the default setting, the period for running updaters is configured under the matcher's configuration field. 2.5.1. Selecting specific updater sets Use the following references to select one, or multiple, updaters for your Red Hat Quay deployment. Configuring Clair for multiple updaters Multiple specific updaters #... updaters: sets: - alpine - aws - osv #... Configuring Clair for Alpine Alpine config.yaml example #... updaters: sets: - alpine #... Configuring Clair for AWS AWS config.yaml example #... updaters: sets: - aws #... Configuring Clair for Debian Debian config.yaml example #... updaters: sets: - debian #... Configuring Clair for Clair CVSS Clair CVSS config.yaml example #... updaters: sets: - clair.cvss #... Configuring Clair for Oracle Oracle config.yaml example #... updaters: sets: - oracle #... Configuring Clair for Photon Photon config.yaml example #... updaters: sets: - photon #... Configuring Clair for SUSE SUSE config.yaml example #... updaters: sets: - suse #... Configuring Clair for Ubuntu Ubuntu config.yaml example #... updaters: sets: - ubuntu #... Configuring Clair for OSV OSV config.yaml example #... updaters: sets: - osv #... 2.5.2. Selecting updater sets for full Red Hat Enterprise Linux (RHEL) coverage For full coverage of vulnerabilities in Red Hat Enterprise Linux (RHEL), you must use the following updater sets: rhel . 
This updater ensures that you have the latest information on the vulnerabilities that affect RHEL. rhcc . This updater keeps track of vulnerabilities related to Red Hat's container images. clair.cvss . This updater offers a comprehensive view of the severity and risk assessment of vulnerabilities by providing Common Vulnerabilities and Exposures (CVE) scores. osv . This updater focuses on tracking vulnerabilities in open-source software components. This updater is recommended due to how commonly Java and Go are used in RHEL products. RHEL updaters example #... updaters: sets: - rhel - rhcc - clair.cvss - osv #... 2.5.3. Advanced updater configuration In some cases, users might want to configure updaters for specific behavior, for example, if you want to allowlist specific ecosystems for the Open Source Vulnerabilities (OSV) updaters. Advanced updater configuration might be useful for proxy deployments or air gapped deployments. Configuration for specific updaters in these scenarios can be passed by putting a key underneath the config environment variable of the updaters object. Users should examine their Clair logs to double-check names. The following YAML snippets detail the various settings available to some Clair updaters. Important For most users, advanced updater configuration is unnecessary. Configuring the alpine updater #... updaters: sets: - alpine config: alpine: url: https://secdb.alpinelinux.org/ #... Configuring the debian updater #... updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #... Configuring the clair.cvss updater #... updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #... Configuring the oracle updater #... updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #... Configuring the photon updater #... updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #... Configuring the rhel updater #... updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #... 1 Boolean. Whether to include information about vulnerabilities that do not have corresponding patches or updates available. Configuring the rhcc updater #... updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #... Configuring the suse updater #... updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #... Configuring the ubuntu updater #... updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #... 1 Used to force the inclusion of specific distribution and version details in the resulting UpdaterSet, regardless of their status in the API response. Useful when you want to ensure that particular distributions and versions are consistently included in your updater configuration. 2 Specifies the distribution name that you want to force to be included in the UpdaterSet. 3 Specifies the version of the distribution you want to force into the UpdaterSet. Configuring the osv updater #... updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #... 1 The list of ecosystems to allow.
When left unset, all ecosystems are allowed. Must be lowercase. For a list of supported ecosystems, see the documentation for defined ecosystems . 2.5.4. Disabling the Clair Updater component In some scenarios, users might want to disable the Clair updater component. Disabling updaters is required when running Red Hat Quay in a disconnected environment. In the following example, Clair updaters are disabled: #... matcher: disable_updaters: true #... 2.6. CVE ratings from the National Vulnerability Database As of Clair v4.2, Common Vulnerability Scoring System (CVSS) enrichment data is now viewable in the Red Hat Quay UI. Additionally, Clair v4.2 adds CVSS scores from the National Vulnerability Database for detected vulnerabilities. With this change, if the vulnerability has a CVSS score that is within 2 levels of the distribution score, the Red Hat Quay UI presents the distribution's score by default. For example: This differs from the previous interface, which would only display the following information: 2.7. Federal Information Processing Standard (FIPS) readiness and compliance The Federal Information Processing Standard (FIPS), developed by the National Institute of Standards and Technology (NIST), is a highly regarded standard for securing and encrypting sensitive data, notably in highly regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform support FIPS by providing a FIPS mode , in which the system only allows usage of specific FIPS-validated cryptographic modules like openssl . This ensures FIPS compliance. 2.7.1. Enabling FIPS compliance Use the following procedure to enable FIPS compliance on your Red Hat Quay deployment. Prerequisite If you are running a standalone deployment of Red Hat Quay, your Red Hat Enterprise Linux (RHEL) deployment is version 8 or later and FIPS-enabled. If you are deploying Red Hat Quay on OpenShift Container Platform, OpenShift Container Platform is version 4.10 or later. Your Red Hat Quay version is 3.5.0 or later. If you are using Red Hat Quay on OpenShift Container Platform on an IBM Power or IBM Z cluster: OpenShift Container Platform version 4.14 or later is required Red Hat Quay version 3.10 or later is required You have administrative privileges for your Red Hat Quay deployment. Procedure In your Red Hat Quay config.yaml file, set the FEATURE_FIPS configuration field to true . For example: --- FEATURE_FIPS = true --- With FEATURE_FIPS set to true , Red Hat Quay runs using FIPS-compliant hash functions.
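Pulling the preceding examples together, the following is a minimal sketch of a combined clair-config.yaml that uses pre-shared key authentication and restricts updaters to the sets recommended for full RHEL coverage. The auth, updaters, and matcher fields shown are taken from the examples in this chapter; the period value for the updater run interval is an assumption and should be verified against your Clair version.

auth:
  psk:
    key: >-
      MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg==
    iss: 'issuer'
matcher:
  disable_updaters: false
  period: 6h # assumed field name for how often the matcher runs its updaters
updaters:
  sets:
    - rhel
    - rhcc
    - clair.cvss
    - osv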
[ "auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer'", "# updaters: sets: - alpine - aws - osv #", "# updaters: sets: - alpine #", "# updaters: sets: - aws #", "# updaters: sets: - debian #", "# updaters: sets: - clair.cvss #", "# updaters: sets: - oracle #", "# updaters: sets: - photon #", "# updaters: sets: - suse #", "# updaters: sets: - ubuntu #", "# updaters: sets: - osv #", "# updaters: sets: - rhel - rhcc - clair.cvss - osv #", "# updaters: sets: - apline config: alpine: url: https://secdb.alpinelinux.org/ #", "# updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #", "# updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #", "# updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #", "# updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #", "# updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #", "# updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #", "# updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #", "# updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #", "# updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #", "# matcher: disable_updaters: true #", "--- FEATURE_FIPS = true ---" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-concepts
Chapter 2. Calculating effective usage with cost models
Chapter 2. Calculating effective usage with cost models Cloud providers charge for the infrastructure costs of running a cluster, regardless of your overall usage. By calculating the effective usage in cost management, you can more accurately correlate cloud costs with a pod or namespace by considering their direct utilization. A pod typically requests resources, such as CPU or memory, from a cluster. The cluster then reserves these requested resources as a minimum, but the pod might use more or less than it initially requested. The effective usage metric in cost management uses whichever is greater per hour: the pod's actual usage or its request. You can create a cost model in cost management to estimate your effective usage. Ultimately, you can use this data to understand how infrastructure cost is distributed to your OpenShift project. Prerequisites You must be a user with Cost Administrator or Cost Price List Administrator permissions. To learn how to configure user roles, see Limiting access to cost management resources in Getting started with cost management . You must add your OpenShift cluster as a cost management data integration. For more details, see Integrating OpenShift Container Platform data into cost management in Getting started with cost management . Procedure Log in to Red Hat Hybrid Cloud Console . From the Services menu, click Spend Management > Cost Management . In the Global Navigation, click Cost Management > Settings . In the Cost Models tab, click Create cost model to open the cost model wizard. Enter a name and description for the cost model and select OpenShift Container Platform as the integration type. Click Next . Create a price list so that you can assign rates to usage or requests. The cost management service collects these metrics from OpenShift but there is no cost attached to them in cost management until you apply a cost model. To create a price list to calculate effective CPU usage, click Create rate . Add a description. In this example, enter effective cpu usage . In the Metric field, select CPU . In the Measurement field, select Effective-usage (core-hours) . In the Rate field, enter the rate you pay for CPU usage. In this example, enter 2 . Click Create rate . To create a price list to calculate effective memory usage, click Create rate . Add a description. In this example, enter effective memory usage . In the Metric field, select Memory . In the Measurement field, select Effective-usage (GiB-hours) . In the Rate field, enter the rate you pay for memory usage. In this example, enter 1 . Click Create rate . Click Next . (Optional) On the Cost calculations page, apply a markup or discount to change how raw costs are calculated for your integrations. Adding a markup to your raw costs can allow you to account for your overhead costs, such as the cost of administering your AWS account, Azure subscription, or other support costs. A markup is an estimation to cover your costs not shown by metrics or usage. On the Cost distribution page, select the CPU or Memory distribution type. The distribution type distributes costs based on CPU or memory metrics in project cost breakdowns. If your cluster has high memory usage, select Memory . If your cluster has high CPU usage, select CPU . Click Next . Assign an integration to your cost model and then click Next . Review the details and then click Create . To review the results of your cost model on an integration, in the Global Navigation, click Cost Management > OpenShift . Select a project and view the results.
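As a worked illustration of the example rates above (the pod sizes here are hypothetical and not part of the procedure): suppose a pod requests 2 cores and 4 GiB of memory for one hour, but actually uses 3 cores and 1 GiB. Effective usage takes the greater of usage and request for each metric, giving 3 effective core-hours and 4 effective GiB-hours. With the example rates of 2 per effective core-hour and 1 per effective GiB-hour, the cost distributed to the pod for that hour is (3 x 2) + (4 x 1) = 10.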
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/analyzing_your_cost_data/calculating-effective-cost-max
Chapter 11. Red Hat Fuse and Red Hat Process Automation Manager
Chapter 11. Red Hat Fuse and Red Hat Process Automation Manager Red Hat Fuse is a distributed, cloud-native integration platform that is part of an agile integration solution. Its distributed approach enables teams to deploy integrated services where required. Fuse has the flexibility to service diverse users, including integration experts, application developers, and business users, each with their own choice of deployment, architecture, and tooling. The API-centric, container-based architecture decouples services so they can be created, extended, and deployed independently. The result is an integration solution that supports collaboration across the enterprise. Red Hat Process Automation Manager is the Red Hat platform for creating business automation applications and microservices. It enables enterprise business and IT users to document, simulate, manage, automate, and monitor business processes and policies. Red Hat Process Automation Manager is designed to empower business and IT users to collaborate more effectively, so business applications can be changed easily and quickly. You can install Red Hat Fuse on the Apache Karaf container platform and then install and configure Red Hat Process Automation Manager in that container. You can also install Red Hat Fuse on a separate instance of Red Hat JBoss Enterprise Application Platform and integrate it with Red Hat Process Automation Manager. The kie-camel module provides integration between Red Hat Fuse and Red Hat Process Automation Manager. Important For the version of Red Hat Fuse that Red Hat Process Automation Manager 7.13 supports, see Red Hat Process Automation Manager 7 Supported Configurations . Note You can install Red Hat Fuse on Spring Boot. Red Hat Process Automation Manager provides no special integration for this scenario. You can use the kie-server-client library in an application running on Red Hat Fuse on Spring Boot to enable communication with Red Hat Process Automation Manager services running on a KIE Server. For instructions about using the kie-server-client library, see Interacting with Red Hat Process Automation Manager using KIE APIs .
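As a minimal sketch of the kie-server-client approach mentioned above (the server URL, credentials, container ID, and process ID are placeholders, and any Spring Boot wiring around this code is omitted), a REST client for a KIE Server might be created and used as follows:

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class StartProcessExample {
    public static void main(String[] args) {
        // Placeholder endpoint and credentials for a running KIE Server
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "kieserverUser", "password1!");
        config.setMarshallingFormat(MarshallingFormat.JSON);
        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        // Start a process instance in a deployed KIE container (placeholder IDs)
        ProcessServicesClient processes = client.getServicesClient(ProcessServicesClient.class);
        Long instanceId = processes.startProcess("my-container", "com.example.myprocess");
        System.out.println("Started process instance " + instanceId);
    }
}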
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/fuse-con
Chapter 3. Internal storage services
Chapter 3. Internal storage services Red Hat OpenShift Data Foundation service is available for consumption internally to the Red Hat OpenShift Container Platform that runs on the following infrastructure: Amazon Web Services (AWS) Bare metal VMware vSphere Microsoft Azure Google Cloud Red Hat OpenStack 13 or higher (installer-provisioned infrastructure) [Technology Preview] IBM Power IBM Z and IBM(R) LinuxONE ROSA with hosted control planes (HCP) [Technology Preview] Creation of an internal cluster resource results in the internal provisioning of the OpenShift Data Foundation base services, and makes additional storage classes available to the applications.
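For reference, creating the internal cluster resource typically means creating a StorageCluster custom resource in the openshift-storage namespace. The following sketch is illustrative only and is not taken from this guide; values such as the device count, requested capacity, and backing storage class depend on your platform and must be adapted to your deployment.

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset
      count: 1 # number of device sets; each set is replicated across three nodes
      replica: 3
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 512Gi # illustrative capacity per device
          storageClassName: gp3 # platform-dependent backing storage class (assumption)
          volumeMode: Block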
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/planning_your_deployment/internal-storage-services_rhodf
Chapter 3. Additional malware service concepts
Chapter 3. Additional malware service concepts The following additional information might be useful in using malware detection service. 3.1. System scan At release, Malware detection administrators must initiate the Insights for Red Hat Enterprise Linux malware detection service collector scan on demand. Alternatively, administrators can run the collector command as a playbook or by using another automation method. Note The recommended frequency of scanning is up to your security team; however, because the scan can take significant time to run, the Insights for Red Hat Enterprise Linux malware detection service team recommends running the malware detection scan weekly. 3.1.1. Initiating a malware detection scan Perform the following procedure to run a malware detection scan. After the scan is complete, data are reported in the Insights for Red Hat Enterprise Linux malware detection service. The scan time depends on a number of factors, including configuration options, number of running processes, etc. Prerequisites Running the Insights client command requires sudo access on the system. Procedure Run USD sudo insights-client --collector malware-detection . View results at Security > Malware . Note You can configure a cron job to run malware detection scans at scheduled intervals. For more information, refer to Setting up recurring scans for Insights services. 3.1.2. Setting up recurring scans for Insights services To get the most accurate recommendations from Red Hat Insights services such as compliance and malware detection, you might need to manually scan and upload data collection reports to the services on a regular schedule. For more information about scheduling see the following: Setting up recurring scans for Insights services 3.2. Disabling malware signatures There might be certain malware signatures that are not of interest to you. This might be due to an intentional configuration, test scan, or a high-noise situation wherein the malware detection service reports matches that are not applicable to your security priorities. For example, the signatures XFTI_EICAR_AV_Test and XFTI_WICAR_Javascript_Test are used to detect the EICAR Anti Malware Testfile and WICAR Javascript Crypto Miner test malware. They are intentional test signatures but do not represent actual malware threats. Signatures such as these can be disabled so that matches against them are not reported in the Red Hat Hybrid Cloud Console. Once a signature is disabled, the malware detection service removes any existing matches against that signature from the Hybrid Cloud Console and ignores the signature in future scans. If the signature is re-enabled, the malware detection service again looks for the signature in future malware-detection scans and shows resulting matches. Note Disabling a signature does not erase the history of matches for that signature. Prerequisites You are a member of a Hybrid Cloud Console User Access group with the Malware detection administrator role. Only users with this role can disable and re-enable signatures. Procedure to disable a signature Navigate to Security > Malware > Signatures . Find the signature to disable. Click the options icon (...) at the end of the signature row and select Disable signature from malware analysis . Alternate procedure to disable a signature You can also disable the signature from the signature information page. Navigate to Security > Malware > Signatures . Find the signature to disable. Click the signature name. 
On the signature details page, click the Actions dropdown and select Disable signature from malware analysis . Disabling several signatures at the same time You can disable several signatures at the same time by checking the box at the start of each signature row, then clicking the options icon (...) next to the filter fields and selecting Disable signatures from malware analysis . Viewing disabled malware signatures All users can view disabled malware signatures. Navigate to Security > Malware > Signatures . View the number of disabled malware signatures in the dashboard at the top of the page. Set filters to show the disabled signatures. Set the primary filter to Signatures included in malware analysis . Set the secondary filter to Disabled signatures . Re-enabling malware signatures Follow the same procedures as before to re-enable previously disabled signatures. 3.3. Interpreting malware detection service results In most cases, running a malware detection scan with YARA results in no signature matches. This means that YARA did not find any matching strings or boolean expressions when comparing a known set of malware signatures to the files included in the scan. The malware detection service will send these results to Red Hat Insights. You can see the details of the system scan and lack of matches in the Insights for Red Hat Enterprise Linux malware detection service UI. In the case that the malware detection scan with YARA does detect a match, it sends the results of that match to Red Hat Insights. You can see details of the match in the malware detection service UI, including the file and date. System scan and signature match history is displayed for the last 14 days, so you can detect patterns and provide information to your security incident response team. For example, if a signature match was found in one scan, but not found in the next scan of the same system, that can indicate the presence of malware that is detectable only when a certain process is running. 3.3.1. Acknowledging and managing malware matches You can acknowledge malware signatures at both the system and signature levels. This allows you to remove irrelevant messages and information from your environment and efficiently review the status of malware results. The Status field on the Signatures page allows you to select a status for each system or signature that you review. You can change the status of each signature match as you continue investigating and managing malware matches. This helps your system users to stay informed about the progress of remediations or evaluations of malware matches. You can also decide which matches are irrelevant or which pose low or no threats to your systems. If you have Malware Detection Administrator permissions, you can delete irrelevant matches from your systems. The Total Matches column on the Signatures page includes all matches for a signature on a system. You can use the list of matches to track and review the history of malware matches on individual systems in your environment. Insights retains malware matches indefinitely, unless you delete them. Acknowledging malware matches and setting status also works as a form of historical record-keeping. Note that if you delete a system from the malware service, the match records are discarded. The New Matches column shows the number of new matches for a signature. A bell icon indicates each new match. A new match has a match date of up to 30 days from when the match was detected, and has a Not Reviewed status.
Matches older than 30 days, or those that have already been reviewed, become part of Total Matches . 3.3.2. Acknowledging malware signature matches Prerequisites To view and filter malware matches, you need a Malware Read-only role. To edit or delete matches, you must have the Malware Detection Administrator role. Procedure Navigate to Security > Malware > Signatures . The list of signatures appears at the bottom of the page. Click on a signature name. The information page for that signature displays. The page shows the list of systems affected by that malware signature. A bell icon indicates new matches for that signature. Use the filters at the top of the list of affected systems to filter by Status . (The default filter is Name .) Click the drop-down menu to the right of the Status filter and select Not Reviewed . Click the drop-down arrow next to the name of an affected system. The list of matches displays, with the most recent matches first. Select the checkbox next to the match that you want to review. To change the status of a match, select the new status from the Match status drop-down menu. Select from the following options: Not reviewed In review On-hold Benign Malware detection test No action Resolved Optional . Add a note in the Note field to include more information about the match status. The green checkmark indicates that the note has been saved. Additional resources For more information about disabling malware signatures, see Disabling malware signatures . For more information about the User Access settings required to view, edit, and delete matches, see User Access settings in the Red Hat Hybrid Cloud Console . 3.3.3. Deleting a single match Prerequisites To edit or delete matches, you must have the Malware Administrator role. Procedure Navigate to Security > Malware > Signatures . The list of signatures appears at the bottom of the page. Click the drop-down arrow next to the signature you want to manage. A list of matches appears below the system, with the most recent match first. Click the options icon (...) at the far right side of the match you want to delete, and then select Delete match . The list of matches refreshes. 3.3.4. Viewing malware matches on systems Prerequisites To view and filter malware matches, you need a Malware Read-only role. To edit or delete matches, you must have the Malware Administrator role. Procedure Only systems that have malware detection enabled appear in the list of affected systems. For more information about how to enable malware detection, see Get started using the Insights for RHEL malware service . Navigate to Security > Malware > Systems . The list of systems displays. If a system has malware matches, the Matched label appears next to the system name. Click on a system name. The system details page displays, with the list of matched malware signatures at the bottom. Click the drop-down next to a malware signature. The list of matches for the signature on the system displays. Acknowledge the matches in the list. For more information, see Additional malware service concepts . 3.4. Additional configuration options for the malware detection collector The /etc/insights-client/malware-detection-config.yml file includes several configuration options. Configuration options filesystem_scan_only This is essentially an allowlist option whereby you specify which files/directories to scan. ONLY the items specified will be scanned. It can be a single item, or a list of items (adhering to yaml syntax for specifying lists of items).
If this option is empty, it essentially means scan all files/directories (depending on other options). filesystem_scan_exclude This is essentially a denylist option whereby you specify which files/directories NOT to scan. A number of directories are already listed, meaning they are excluded by default. These include virtual filesystem directories, for example /proc, /sys, /cgroup; directories that might have external mounted filesystems, for example /mnt and /media; and some other directories recommended to not be scanned, for example /dev and /var/log/insights-client (to prevent false positives). You are free to modify the list to add (or subtract) files/directories. Note that if the same item is specified both in filesystem_scan_only and filesystem_scan_exclude, for example /home, then filesystem_scan_exclude will 'win'. That is, /home will not be scanned. As another example, it is possible to specify a parent directory in filesystem_scan_only, for example /var, and then filesystem_scan_exclude certain directories within it, for example /var/lib and /var/log/insights-client. Then everything in /var except for /var/lib and /var/log/insights-client will be scanned. filesystem_scan_since Only scan files that have been modified 'since', where since can be an integer representing days ago or 'last', meaning since the last filesystem scan. For example, filesystem_scan_since: 1 means only scan files that have been created or modified since 1 day ago (within the last day); filesystem_scan_since: 7 means only scan files that have been created/modified since 7 days ago (within the last week); and filesystem_scan_since: last means only scan files that have been created/modified since the last successful filesystem scan of the malware-client. exclude_network_filesystem_mountpoints and network_filesystem_types Setting exclude_network_filesystem_mountpoints: true means that the malware detection collector does not scan mountpoints of mounted network filesystems. This is the default setting; it prevents scanning external filesystems, which would result in unnecessary network traffic and slower scanning. The filesystems it considers to be network filesystems are listed in the network_filesystem_types option. Any filesystem types that are in that list and that are mounted will be excluded from scanning. These mountpoints are essentially added to the list of excluded directories from the filesystem_scan_exclude option. If you set exclude_network_filesystem_mountpoints: false you can still exclude mountpoints with the filesystem_scan_exclude option. network_filesystem_types Define network filesystem types. scan_processes Include running processes in the scan. Note scan_processes is disabled by default to prevent an impact on system performance when scanning numerous or large processes. When the option is set to false, no processes are scanned and the processes_scan options that follow are ignored. processes_scan_only This is similar to filesystem_scan_only but applies to processes. Processes may be specified as a single PID, for example 123, or a range of PIDs, for example 1000..2000, or by process name, for example Chrome. For example, the values 123, 1000..2000, and Chrome would mean that PID 123, PIDs from 1000 to 2000 inclusive, and PIDs for process names containing the string 'chrome' would ONLY be scanned. processes_scan_exclude This is similar to filesystem_scan_exclude but applies to processes. Like processes_scan_only, processes may be specified as a single PID, a range of PIDs, or by process name.
If a process appears in both processes_scan_only and processes_scan_exclude, then processes_scan_exclude will 'win' and the process will be excluded. processes_scan_since This is similar to filesystem_scan_since but applies to processes. Only scan processes that have been started 'since', where since can be an integer representing days ago or 'last', meaning since the last successful processes scan of the malware-client. Environment variables All of the options in the /etc/insights-client/malware-detection-config.yml file can also be set using environment variables. Using the environment variable overrides the value of the same option in the configuration file. The environment variable has the same name as the configuration file option, but is uppercase. For example, the configuration file option test_scan is the environment variable TEST_SCAN . For the FILESYSTEM_SCAN_ONLY , FILESYSTEM_SCAN_EXCLUDE , PROCESSES_SCAN_ONLY , PROCESSES_SCAN_EXCLUDE , and NETWORK_FILESYSTEM_TYPES environment variables, use a list of comma-separated values. For example, to scan only the directories /etc , /tmp and /var/lib , use the following environment variable: To specify this on the command line (along with disabling the test scan), use the following: Resources For more information about the Insights client, see Client Configuration Guide for Red Hat Insights with FedRAMP . 3.5. Enabling notifications and integrations for malware events You can enable the notifications service on Red Hat Hybrid Cloud Console to send notifications whenever the malware service detects a signature match on at least one system scan and generates an alert. Using the notifications service frees you from having to continually check the Red Hat Insights for Red Hat Enterprise Linux dashboard for alerts. For example, you can configure the notifications service to automatically send an email message whenever the malware service detects a possible threat to your systems, or to send an email digest of all the alerts that the malware service generates each day. In addition to sending email messages, you can configure the notifications service to send event data in other ways: Using an authenticated client to query Red Hat Insights APIs for event data Using webhooks to send events to third-party applications that accept inbound requests Integrating notifications with applications such as Splunk to route malware events to the application dashboard Malware service notifications include the following information: name of the affected system how many signature matches were found during the system scan a link to view the details on Red Hat Hybrid Cloud Console Enabling the notifications service requires three main steps: First, an Organization administrator creates a User access group with the Notifications administrator role, and then adds account members to the group. Next, a Notifications administrator sets up behavior groups for events in the notifications service. Behavior groups specify the delivery method for each notification. For example, a behavior group can specify whether email notifications are sent to all users, or just to Organization administrators. Finally, users who receive email notifications from events must set their user preferences so that they receive individual emails for each event. Additional resources For more information about how to set up notifications for malware alerts, see Configuring notifications on the Red Hat Hybrid Cloud Console with FedRAMP .
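The listings collected at the end of this chapter show the environment-variable form of these options. For comparison, the following excerpt sketches how the same kinds of settings look inside /etc/insights-client/malware-detection-config.yml itself. It is an illustrative fragment only: the paths, day counts, and process values are placeholders chosen for this example, not recommended defaults.

# Illustrative excerpt of /etc/insights-client/malware-detection-config.yml
# (placeholder values, not recommended defaults)
filesystem_scan_only:          # scan ONLY these paths; leave empty to scan everything
  - /etc
  - /var/www
filesystem_scan_exclude:       # never scan these paths; excludes win over scan_only
  - /var/www/cache
filesystem_scan_since: 3       # integer = days ago, or 'last' for the last successful scan
exclude_network_filesystem_mountpoints: true   # default; skip mounted network filesystems
scan_processes: false          # disabled by default; set to true to scan running processes
processes_scan_only:
  - 123                        # a single PID
  - 1000..2000                 # a range of PIDs
  - chrome                     # processes whose name contains 'chrome'
processes_scan_exclude: []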
[ "FILESYSTEM_SCAN_ONLY=/etc,/tmp,/var/lib", "sudo FILESYSTEM_SCAN_ONLY=/etc,/tmp,/var/lib TEST_SCAN=false insights-client --collector malware-detection" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_reporting_malware_signatures_on_rhel_systems_with_fedramp/malware-svc-additional-concepts_malware-svc-getting-started
Chapter 10. Ceph File System mirrors
Chapter 10. Ceph File System mirrors As a storage administrator, you can replicate a Ceph File System (CephFS) to a remote Ceph File System on another Red Hat Ceph Storage cluster. Prerequisites The source and the target storage clusters must be running Red Hat Ceph Storage 6.0 or later. 10.1. Ceph File System mirroring The Ceph File System (CephFS) supports asynchronous replication of snapshots to a remote CephFS on another Red Hat Ceph Storage cluster. Snapshot synchronization copies snapshot data to a remote Ceph File System, and creates a new snapshot on the remote target with the same name. You can configure specific directories for snapshot synchronization. Management of CephFS mirrors is done by the CephFS mirroring daemon ( cephfs-mirror ). This snapshot data is synchronized by doing a bulk copy to the remote CephFS. The chosen order of synchronizing snapshot pairs is based on the creation using the snap-id . Important Synchronizing hard links is not supported. Hard linked files get synchronized as regular files. The CephFS mirroring includes features, for example snapshot incarnation or high availability. These can be managed through Ceph Manager mirroring module, which is the recommended control interface. Ceph Manager Module and interfaces The Ceph Manager mirroring module is disabled by default. It provides interfaces for managing mirroring of directory snapshots. Ceph Manager interfaces are mostly wrappers around monitor commands for managing CephFS mirroring. They are the recommended control interface. The Ceph Manager mirroring module is implemented as a Ceph Manager plugin. It is responsible for assigning directories to the cephfs-mirror daemons for synchronization. The Ceph Manager mirroring module also provides a family of commands to control mirroring of directory snapshots. The mirroring module does not manage the cephfs-mirror daemons. The stopping, starting, restarting, and enabling of the cephfs-mirror daemons is controlled by systemctl , but managed by cephadm . Note Mirroring module commands use the fs snapshot mirror prefix as compared to the monitor commands with the fs mirror prefix. Assure that you are using the module command prefix to control the mirroring of directory snapshots. Snapshot incarnation A snapshot might be deleted and recreated with the same name and different content. The user could synchronize an "old" snapshot earlier and recreate the snapshot when the mirroring was disabled. Using snapshot names to infer the point-of-continuation would result in the "new" snapshot, an incarnation, never getting picked up for synchronization. Snapshots on the secondary file system store the snap-id of the snapshot it was synchronized from. This metadata is stored in the SnapInfo structure on the Ceph Metadata Server. High availability You can deploy multiple cephfs-mirror daemons on two or more nodes to achieve concurrency in synchronization of directory snapshots. When cephfs-mirror daemons are deployed or terminated, the Ceph Manager mirroring module discovers the modified set of cephfs-mirror daemons and rebalances the directory assignment amongst the new set thus providing high availability. cephfs-mirror daemons share the synchronization load using a simple M/N policy, where M is the number of directories and N is the number of cephfs-mirror daemons. Re-addition of Ceph File System mirror peers When re-adding or reassigning a peer to a CephFS in another cluster, ensure that all mirror daemons have stopped synchronization to the peer. 
You can verify this with the fs mirror status command. The Peer UUID should not show up in the command output. Purge synchronized directories from the peer before re-adding it to another CephFS, especially those directories which might exist in the new primary file system. This is not required if you are re-adding a peer to the same primary file system it was earlier synchronized from. Additional Resources See Viewing the mirror status for a Ceph File System for more details on the fs mirror status command. 10.2. Configuring a snapshot mirror for a Ceph File System You can configure a Ceph File System (CephFS) for mirroring to replicate snapshots to another CephFS on a remote Red Hat Ceph Storage cluster. Note The time taken for synchronizing to a remote storage cluster depends on the file size and the total number of files in the mirroring path. Prerequisites The source and the target storage clusters must be healthy and running Red Hat Ceph Storage 6.0 or later. Root-level access to a Ceph Monitor node in the source and the target storage clusters. At least one Ceph File System deployed on your storage cluster. Procedure Log into the Cephadm shell: Example On the source storage cluster, deploy the CephFS mirroring daemon: Syntax Example This command creates a Ceph user called, cephfs-mirror , and deploys the cephfs-mirror daemon on the given node. Optional: Deploy multiple CephFS mirroring daemons and achieve high availability: Syntax Example This example deploys three cephfs-mirror daemons on different hosts. Warning Do not separate the hosts with commas as it results in the following error: On the target storage cluster, create a user for each CephFS peer: Syntax Example On the source storage cluster, enable the CephFS mirroring module: Example On the source storage cluster, enable mirroring on a Ceph File System: Syntax Example Optional: Disable snapshot mirroring: Syntax Example Warning Disabling snapshot mirroring on a file system removes the configured peers. You have to import the peers again by bootstrapping them. Prepare the target peer storage cluster. On a target node, enable the mirroring Ceph Manager module: Example On the same target node, create the peer bootstrap: Syntax The SITE_NAME is a user-defined string to identify the target storage cluster. Example Copy the token string between the double quotes for use in the step. On the source storage cluster, import the bootstrap token from the target storage cluster: Syntax Example On the source storage cluster, list the CephFS mirror peers: Syntax Example Optional: Remove a snapshot peer: Syntax Example Note See Viewing the mirror status for a Ceph File System on how to find the peer UUID value. On the source storage cluster, configure a directory for snapshot mirroring: Syntax Example Important Only absolute paths inside the Ceph File System are valid. Note The Ceph Manager mirroring module normalizes the path. For example, the /d1/d2/../dN directories are equivalent to /d1/d2 . Once a directory has been added for mirroring, its ancestor directories and subdirectories are prevented from being added for mirroring. Optional: Stop snapshot mirroring for a directory: Syntax Example Additional Resources See the Viewing the mirror status for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more information. See the Ceph File System mirroring section in the Red Hat Ceph Storage File System Guide for more information. 10.3. 
Viewing the mirror status for a Ceph File System The Ceph File System (CephFS) mirror daemon ( cephfs-mirror ) gets asynchronous notifications about changes in the CephFS mirroring status, along with peer updates. The CephFS mirroring module provides a mirror daemon status interface to check mirror daemon status. For more detailed information, you can query the cephfs-mirror admin socket with commands to retrieve the mirror status and peer status. Prerequisites A running Red Hat Ceph Storage cluster. At least one deployment of a Ceph File System with mirroring enabled. Root-level access to the node running the CephFS mirroring daemon. Procedure Log into the Cephadm shell: Example Check the cephfs-mirror daemon status: Syntax Example For more detailed information, use the admin socket interface as detailed below. Find the Ceph File System ID on the node running the CephFS mirroring daemon: Syntax Example The Ceph File System ID in this example is cephfs@11 . Note When mirroring is disabled, the respective fs mirror status command for the file system does not show up in the help command. View the mirror status: Syntax Example 1 This is the unique peer UUID. View the peer status: Syntax Example The state can be one of these three values: 1 idle means the directory is currently not being synchronized. 2 syncing means the directory is currently being synchronized. 3 failed means the directory has hit the upper limit of consecutive failures. The default number of consecutive failures is 10, and the default retry interval is 60 seconds. Display the directory to which cephfs-mirror daemon is mapped: Syntax Example 1 instance_id is the RADOS instance-ID associated with a cephfs-mirror daemon. Example 1 stalled state means the CephFS mirroring is stalled. The second example shows the command output when no mirror daemons are running. Additional Resources See the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide for more information. Additional Resources For details, see the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide . For details, see the Red Hat Ceph Storage Installation Guide . For details, see the The Ceph File System Metadata Server section in the Red Hat Ceph Storage File System Guide . For details, see the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide .
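The Syntax and Example listings for this chapter are collected below. As a quick orientation, the following condensed sketch strings the source-cluster and target-cluster steps together in order, using the same example values as those listings ( cephfs , client.mirror_remote , remote-site , and /volumes/_nogroup/subvol_1 ); substitute your own file system name, peer, site name, and path, and run each block from within the cephadm shell on the appropriate cluster.

# On the source cluster: deploy the mirror daemon and enable mirroring
ceph orch apply cephfs-mirror "node1.example.com"
ceph mgr module enable mirroring
ceph fs snapshot mirror enable cephfs

# On the target cluster: create the peer user, enable the module, and create a bootstrap token
ceph fs authorize cephfs client.mirror_remote / rwps
ceph mgr module enable mirroring
ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site

# Back on the source cluster: import the token, confirm the peer, and add a directory to mirror
ceph fs snapshot mirror peer_bootstrap import cephfs <TOKEN>
ceph fs snapshot mirror peer_list cephfs
ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1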
[ "cephadm shell", "ceph orch apply cephfs-mirror [\" NODE_NAME \"]", "ceph orch apply cephfs-mirror \"node1.example.com\" Scheduled cephfs-mirror update", "ceph orch apply cephfs-mirror --placement=\" PLACEMENT_SPECIFICATION \"", "ceph orch apply cephfs-mirror --placement=\"3 host1 host2 host3\" Scheduled cephfs-mirror update", "Error EINVAL: name component must include only a-z, 0-9, and -", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME / rwps", "ceph fs authorize cephfs client.mirror_remote / rwps [client.mirror_remote] key = AQCjZ5Jg739AAxAAxduIKoTZbiFJ0lgose8luQ==", "ceph mgr module enable mirroring", "ceph fs snapshot mirror enable FILE_SYSTEM_NAME", "ceph fs snapshot mirror enable cephfs", "ceph fs snapshot mirror disable FILE_SYSTEM_NAME", "ceph fs snapshot mirror disable cephfs", "ceph mgr module enable mirroring", "ceph fs snapshot mirror peer_bootstrap create FILE_SYSTEM_NAME CLIENT_NAME SITE_NAME", "ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site {\"token\": \"eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==\"}", "ceph fs snapshot mirror peer_bootstrap import FILE_SYSTEM_NAME TOKEN", "ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==", "ceph fs snapshot mirror peer_list FILE_SYSTEM_NAME", "ceph fs snapshot mirror peer_list cephfs {\"e5ecb883-097d-492d-b026-a585d1d7da79\": {\"client_name\": \"client.mirror_remote\", \"site_name\": \"remote-site\", \"fs_name\": \"cephfs\", \"mon_host\": \"[v2:10.0.211.54:3300/0,v1:10.0.211.54:6789/0] [v2:10.0.210.56:3300/0,v1:10.0.210.56:6789/0] [v2:10.0.210.65:3300/0,v1:10.0.210.65:6789/0]\"}}", "ceph fs snapshot mirror peer_remove FILE_SYSTEM_NAME PEER_UUID", "ceph fs snapshot mirror peer_remove cephfs e5ecb883-097d-492d-b026-a585d1d7da79", "ceph fs snapshot mirror add FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1", "ceph fs snapshot mirror remove FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror remove cephfs /home/user1", "cephadm shell", "ceph fs snapshot mirror daemon status", "ceph fs snapshot mirror daemon status [ { \"daemon_id\": 15594, \"filesystems\": [ { \"filesystem_id\": 1, \"name\": \"cephfs\", \"directory_count\": 1, \"peers\": [ { \"uuid\": \"e5ecb883-097d-492d-b026-a585d1d7da79\", \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" }, \"stats\": { \"failure_count\": 1, \"recovery_count\": 0 } } ] } ] } ]", "ceph --admin-daemon PATH_TO_THE_ASOK_FILE help", "ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok help { \"fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e\": \"get peer mirror status\", \"fs mirror status cephfs@11\": \"get filesystem mirror status\", }", "ceph --admin-daemon PATH_TO_THE_ASOK_FILE fs mirror status FILE_SYSTEM_NAME @_FILE_SYSTEM_ID", "ceph --admin-daemon 
/var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok fs mirror status cephfs@11 { \"rados_inst\": \"192.168.0.5:0/1476644347\", \"peers\": { \"1011435c-9e30-4db6-b720-5bf482006e0e\": { 1 \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" } } }, \"snap_dirs\": { \"dir_count\": 1 } }", "ceph --admin-daemon PATH_TO_ADMIN_SOCKET fs mirror status FILE_SYSTEM_NAME @ FILE_SYSTEM_ID PEER_UUID", "ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e { \"/home/user1\": { \"state\": \"idle\", 1 \"last_synced_snap\": { \"id\": 120, \"name\": \"snap1\", \"sync_duration\": 0.079997898999999997, \"sync_time_stamp\": \"274900.558797s\" }, \"snaps_synced\": 2, 2 \"snaps_deleted\": 0, 3 \"snaps_renamed\": 0 } }", "ceph fs snapshot mirror dirmap FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"instance_id\": \"25184\", 1 \"last_shuffled\": 1661162007.012663, \"state\": \"mapped\" }", "ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"reason\": \"no mirror daemons running\", \"state\": \"stalled\" 1 }" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/file_system_guide/ceph-file-system-mirrors
Chapter 3. Important update on odo
Chapter 3. Important update on odo Red Hat does not provide information about odo on the Red Hat OpenShift Service on AWS documentation site. See the documentation maintained by Red Hat and the upstream community for documentation information related to odo . Important For the materials maintained by the upstream community, Red Hat provides support under Cooperative Community Support .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/cli_tools/developer-cli-odo
Chapter 4. Stretch clusters for Ceph storage
Chapter 4. Stretch clusters for Ceph storage As a storage administrator, you can configure stretch clusters by entering stretch mode with 2-site clusters. Red Hat Ceph Storage is capable of withstanding the loss of Ceph OSDs because it expects all parts of its network and cluster to be equally reliable, with failures randomly distributed across the CRUSH map. If a number of OSDs is shut down, the remaining OSDs and monitors still manage to operate. However, this might not be the best solution for some stretched cluster configurations where a significant part of the Ceph cluster depends on a single network component. An example is a single cluster located in multiple data centers, for which the user wants to sustain the loss of a full data center. The standard configuration is with two data centers. Other configurations are in clouds or availability zones. Each site holds two copies of the data, therefore, the replication size is four. The third site should have a tiebreaker monitor; this can be a virtual machine, or a site with high latency compared to the main sites. This monitor chooses one of the sites to restore data if the network connection fails and both data centers remain active. Note The standard Ceph configuration survives many failures of the network or data centers and it never compromises data consistency. If you restore enough Ceph servers following a failure, it recovers. Ceph maintains availability if you lose a data center but can still form a quorum of monitors and have all the data available, with enough copies to satisfy the pools' min_size , or with CRUSH rules that replicate again to meet the size. Note There are no additional steps to power down a stretch cluster. See Powering down and rebooting Red Hat Ceph Storage cluster for more information. Stretch cluster failures Red Hat Ceph Storage never compromises on data integrity and consistency. If there is a network failure or a loss of nodes and the services can still be restored, Ceph returns to normal functionality on its own. However, there are situations where you lose data availability even if you have enough servers available to meet Ceph's consistency and sizing constraints, or where you unexpectedly do not meet the constraints. The first important type of failure is caused by inconsistent networks. If there is a network split, Ceph might be unable to mark an OSD as down to remove it from the acting placement group (PG) sets despite the primary OSD being unable to replicate data. When this happens, I/O is not permitted because Ceph cannot meet its durability guarantees. The second important category of failures is when it appears that you have data replicated across data centers, but the constraints are not sufficient to guarantee this. For example, you might have data centers A and B, and the CRUSH rule targets three copies and places a copy in each data center with a min_size of 2 . The PG might go active with two copies in site A and no copies in site B, which means that if you lose site A, you lose the data and Ceph cannot operate on it. This situation is difficult to avoid with standard CRUSH rules. 4.1. Stretch mode for a storage cluster To configure stretch clusters, you must enter the stretch mode. When stretch mode is enabled, the Ceph OSDs only take PGs as active when they peer across data centers, or whichever other CRUSH bucket type you specified, assuming both are active. Pools increase in size from the default three to four, with two copies on each site.
In stretch mode, Ceph OSDs are only allowed to connect to monitors within the same data center. New monitors are not allowed to join the cluster without a specified location. If all the OSDs and monitors from a data center become inaccessible at once, the surviving data center will enter a degraded stretch mode. This issues a warning, reduces the min_size to 1 , and allows the cluster to reach an active state with the data from the remaining site. Note The degraded state also triggers warnings that the pools are too small, because the pool size does not get changed. However, a special stretch mode flag prevents the OSDs from creating extra copies in the remaining data center, therefore it still keeps 2 copies. When the missing data center becomes accessible again, the cluster enters recovery stretch mode. This changes the warning and allows peering, but still requires only the OSDs from the data center that was up the whole time. When all PGs are in a known state and are not degraded or incomplete, the cluster goes back to the regular stretch mode, ends the warning, and restores min_size to its starting value 2 . The cluster again requires both sites to peer, not only the site that stayed up the whole time, therefore you can fail over to the other site, if necessary. Stretch mode limitations It is not possible to exit from stretch mode once it is entered. You cannot use erasure-coded pools with clusters in stretch mode. You can neither enter the stretch mode with erasure-coded pools, nor create an erasure-coded pool when the stretch mode is active. Stretch mode with no more than two sites is supported. The weights of the two sites should be the same. If they are not, you receive the following error: Example To achieve the same weights on both sites, the Ceph OSDs deployed in the two sites should be of equal size, that is, the storage capacity in the first site is equivalent to the storage capacity in the second site. While it is not enforced, you should run two Ceph monitors on each site and a tiebreaker, for a total of five. This is because OSDs can only connect to monitors in their own site when in stretch mode. You have to create your own CRUSH rule, which provides two copies on each site, for a total of four copies across both sites. You cannot enable stretch mode if you have existing pools with non-default size or min_size . Because the cluster runs with min_size 1 when degraded, you should only use stretch mode with all-flash OSDs. This minimizes the time needed to recover once connectivity is restored, and minimizes the potential for data loss. Additional Resources See Troubleshooting clusters in stretch mode for troubleshooting steps. 4.1.1. Setting the crush location for the daemons Before you enter the stretch mode, you need to prepare the cluster by setting the CRUSH location for the daemons in the Red Hat Ceph Storage cluster. There are two ways to do this: Bootstrap the cluster through a service configuration file, where the locations are added to the hosts as part of deployment. Set the locations manually through ceph osd crush add-bucket and ceph osd crush move commands after the cluster is deployed. Method 1: Bootstrapping the cluster Prerequisites Root-level access to the nodes.
Procedure If you are bootstrapping your new storage cluster, you can create the service configuration .yaml file that adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services should run: Example Bootstrap the storage cluster with the --apply-spec option: Syntax Example Important You can use different command options with the cephadm bootstrap command. However, always include the --apply-spec option to use the service configuration file and configure the host locations. Additional Resources See Bootstrapping a new storage cluster for more information about Ceph bootstrapping and different cephadm bootstrap command options. Method 2: Setting the locations after the deployment Prerequisites Root-level access to the nodes. Procedure Add the two buckets to which you plan to set the location of your non-tiebreaker monitors to the CRUSH map, specifying the bucket type as datacenter : Syntax Example Move the buckets under root=default : Syntax Example Move the OSD hosts according to the required CRUSH placement: Syntax Example 4.1.2. Entering the stretch mode The new stretch mode is designed to handle two sites. There is a lower risk of component availability outages with 2-site clusters. Prerequisites Root-level access to the nodes. The CRUSH location is set for the hosts. Procedure Set the location of each monitor, matching your CRUSH map: Syntax Example Generate a CRUSH rule which places two copies in each data center: Syntax Example Edit the decompiled CRUSH map file to add a new rule: Example 1 The rule id has to be unique. In this example, there is only one more rule with id 0 , therefore the id 1 is used; however, you might need to use a different rule ID depending on the number of existing rules. 2 3 In this example, there are two data center buckets named DC1 and DC2 . Note This rule makes the cluster have read-affinity towards data center DC1 . Therefore, all the reads or writes happen through Ceph OSDs placed in DC1 . If this is not desirable, and reads or writes are to be distributed evenly across the zones, the CRUSH rule is the following: Example In this rule, the data center is selected randomly and automatically. See CRUSH rules for more information on firstn and indep options. Inject the CRUSH map to make the rule available to the cluster: Syntax Example If you do not run the monitors in connectivity mode, set the election strategy to connectivity : Example Enter stretch mode by setting the location of the tiebreaker monitor to split across the data centers: Syntax Example In this example, the monitor mon.host07 is the tiebreaker. Important The location of the tiebreaker monitor should differ from the data centers to which you previously set the non-tiebreaker monitors. In the example above, it is data center DC3 . Important Do not add this data center to the CRUSH map as it results in the following error when you try to enter stretch mode: Note If you are writing your own tooling for deploying Ceph, you can use a new --set-crush-location option when booting monitors, instead of running the ceph mon set_location command. This option accepts only a single bucket=location pair, for example ceph-mon --set-crush-location 'datacenter=DC1' , which must match the bucket type you specified when running the enable_stretch_mode command. Verify that the stretch mode is enabled successfully: Example The stretch_mode_enabled should be set to true .
You can also see the number of stretch buckets, the stretch mode bucket, and whether the stretch mode is degraded or recovering. Verify that the monitors are in the appropriate locations: Example You can also see which monitor is the tiebreaker, and the monitor election strategy. Additional Resources See Configuring monitor election strategy for more information about the monitor election strategy. 4.1.3. Adding OSD hosts in stretch mode You can add Ceph OSDs in the stretch mode. The procedure is similar to the addition of the OSD hosts on a cluster where stretch mode is not enabled. Prerequisites A running Red Hat Ceph Storage cluster. Stretch mode is enabled on the cluster. Root-level access to the nodes. Procedure List the available devices to deploy OSDs: Syntax Example Deploy the OSDs on specific hosts or on all the available devices: Create an OSD from a specific device on a specific host: Syntax Example Deploy OSDs on any available and unused devices: Important This command creates collocated WAL and DB devices. If you want to create non-collocated devices, do not use this command. Example Move the OSD hosts under the CRUSH bucket: Syntax Example Note Ensure you add the same topology nodes on both sites. Issues might arise if hosts are added only on one site. Additional Resources See Adding OSDs for more information about the addition of Ceph OSDs.
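As with the rest of this chapter, the Syntax and Example listings are collected below. The following condensed sketch walks the post-deployment path (Method 2) through to entering stretch mode, reusing the DC1 , DC2 , DC3 , and host names from those listings; it assumes the stretch_rule has already been injected into the CRUSH map with the crushtool and ceph osd setcrushmap steps shown below, and only a subset of the host placements is repeated here for brevity.

# Create the data center buckets and move them under the default root
ceph osd crush add-bucket DC1 datacenter
ceph osd crush add-bucket DC2 datacenter
ceph osd crush move DC1 root=default
ceph osd crush move DC2 root=default

# Place OSD hosts and monitors in their data centers (repeat for every host and monitor)
ceph osd crush move host01 datacenter=DC1
ceph osd crush move host04 datacenter=DC2
ceph mon set_location host01 datacenter=DC1
ceph mon set_location host04 datacenter=DC2

# Use the connectivity election strategy, place the tiebreaker, and enter stretch mode
ceph mon set election_strategy connectivity
ceph mon set_location host07 datacenter=DC3
ceph mon enable_stretch_mode host07 stretch_rule datacenter

# Verify
ceph osd dump | grep stretch_mode_enabled
ceph mon dump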
[ "ceph mon enable_stretch_mode host05 stretch_rule datacenter Error EINVAL: the 2 datacenter instances in the cluster have differing weights 25947 and 15728 but stretch mode currently requires they be the same!", "service_type: host addr: host01 hostname: host01 location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: host02 hostname: host02 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: host03 hostname: host03 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: host04 hostname: host04 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: host05 hostname: host05 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: host06 hostname: host06 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: host07 hostname: host07 labels: - mon --- service_type: mon placement: label: \"mon\" --- service_id: cephfs placement: label: \"mds\" --- service_type: mgr service_name: mgr placement: label: \"mgr\" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: \"osd\" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: \"rgw\" spec: rgw_frontend_port: 8080", "cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url REGISTRY_URL --registry-username USER_NAME --registry-password PASSWORD", "cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "ceph osd crush add-bucket BUCKET_NAME BUCKET_TYPE", "ceph osd crush add-bucket DC1 datacenter ceph osd crush add-bucket DC2 datacenter", "ceph osd crush move BUCKET_NAME root=default", "ceph osd crush move DC1 root=default ceph osd crush move DC2 root=default", "ceph osd crush move HOST datacenter= DATACENTER", "ceph osd crush move host01 datacenter=DC1", "ceph mon set_location HOST datacenter= DATACENTER", "ceph mon set_location host01 datacenter=DC1 ceph mon set_location host02 datacenter=DC1 ceph mon set_location host04 datacenter=DC2 ceph mon set_location host05 datacenter=DC2 ceph mon set_location host07 datacenter=DC3", "ceph osd getcrushmap > COMPILED_CRUSHMAP_FILENAME crushtool -d COMPILED_CRUSHMAP_FILENAME -o DECOMPILED_CRUSHMAP_FILENAME", "ceph osd getcrushmap > crush.map.bin crushtool -d crush.map.bin -o crush.map.txt", "rule stretch_rule { id 1 1 type replicated min_size 1 max_size 10 step take DC1 2 step chooseleaf firstn 2 type host step emit step take DC2 3 step chooseleaf firstn 2 type host step emit }", "rule stretch_rule { id 1 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type datacenter step chooseleaf firstn 2 type host step emit }", "crushtool -c DECOMPILED_CRUSHMAP_FILENAME -o COMPILED_CRUSHMAP_FILENAME ceph osd setcrushmap -i COMPILED_CRUSHMAP_FILENAME", "crushtool -c crush.map.txt -o crush2.map.bin ceph osd setcrushmap -i crush2.map.bin", "ceph mon set election_strategy connectivity", "ceph mon set_location HOST datacenter= DATACENTER ceph mon enable_stretch_mode HOST stretch_rule datacenter", "ceph mon set_location host07 datacenter=DC3 ceph mon enable_stretch_mode host07 stretch_rule datacenter", "Error 
EINVAL: there are 3 datacenters in the cluster but stretch mode currently only works with 2!", "ceph osd dump epoch 361 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d created 2023-01-16T05:47:28.482717+0000 modified 2023-01-17T17:36:50.066183+0000 flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit crush_version 31 full_ratio 0.95 backfillfull_ratio 0.92 nearfull_ratio 0.85 require_min_compat_client luminous min_compat_client luminous require_osd_release quincy stretch_mode_enabled true stretch_bucket_count 2 degraded_stretch_mode 0 recovering_stretch_mode 0 stretch_mode_bucket 8", "ceph mon dump epoch 19 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d last_changed 2023-01-17T04:12:05.709475+0000 created 2023-01-16T05:47:25.631684+0000 min_mon_release 16 (pacific) election_strategy: 3 stretch_mode_enabled 1 tiebreaker_mon host07 disallowed_leaders host07 0: [v2:132.224.169.63:3300/0,v1:132.224.169.63:6789/0] mon.host07; crush_location {datacenter=DC3} 1: [v2:220.141.179.34:3300/0,v1:220.141.179.34:6789/0] mon.host04; crush_location {datacenter=DC2} 2: [v2:40.90.220.224:3300/0,v1:40.90.220.224:6789/0] mon.host01; crush_location {datacenter=DC1} 3: [v2:60.140.141.144:3300/0,v1:60.140.141.144:6789/0] mon.host02; crush_location {datacenter=DC1} 4: [v2:186.184.61.92:3300/0,v1:186.184.61.92:6789/0] mon.host05; crush_location {datacenter=DC2} dumped monmap epoch 19", "ceph orch device ls [--hostname= HOST_1 HOST_2 ] [--wide] [--refresh]", "ceph orch device ls", "ceph orch daemon add osd HOST : DEVICE_PATH", "ceph orch daemon add osd host03:/dev/sdb", "ceph orch apply osd --all-available-devices", "ceph osd crush move HOST datacenter= DATACENTER", "ceph osd crush move host03 datacenter=DC1 ceph osd crush move host06 datacenter=DC2" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/administration_guide/stretch-clusters-for-ceph-storage
Chapter 13. Setting up cross-site replication
Chapter 13. Setting up cross-site replication Ensure availability with Data Grid Operator by configuring geographically distributed clusters as a unified service. You can configure clusters to perform cross-site replication with: Connections that Data Grid Operator manages. Connections that you configure and manage. Note You can use both managed and manual connections for Data Grid clusters in the same Infinispan CR. You must ensure that Data Grid clusters establish connections in the same way at each site. 13.1. Cross-site replication expose types You can use a NodePort service, a LoadBalancer service, or an OpenShift Route to handle network traffic for backup operations between Data Grid clusters. Before you start setting up cross-site replication you should determine what expose type is available for your Red Hat OpenShift cluster. In some cases you may require an administrator to provision services before you can configure an expose type. NodePort A NodePort is a service that accepts network traffic at a static port, in the 30000 to 32767 range, on an IP address that is available externally to the OpenShift cluster. To use a NodePort as the expose type for cross-site replication, an administrator must provision external IP addresses for each OpenShift node. In most cases, an administrator must also configure DNS routing for those external IP addresses. LoadBalancer A LoadBalancer is a service that directs network traffic to the correct node in the OpenShift cluster. Whether you can use a LoadBalancer as the expose type for cross-site replication depends on the host platform. AWS supports network load balancers (NLB) while some other cloud platforms do not. To use a LoadBalancer service, an administrator must first create an ingress controller backed by an NLB. Route An OpenShift Route allows Data Grid clusters to connect with each other through a public secure URL. Data Grid uses TLS with the SNI header to send backup requests between clusters through an OpenShift Route . To do this you must add a keystore with TLS certificates so that Data Grid can encrypt network traffic for cross-site replication. When you specify Route as the expose type for cross-site replication, Data Grid Operator creates a route with TLS passthrough encryption for each Data Grid cluster that it manages. You can specify a hostname for the Route but you cannot specify a Route that you have already created. Additional resources Configuring ingress cluster traffic overview 13.2. Managed cross-site replication Data Grid Operator can discover Data Grid clusters running in different data centers to form global clusters. When you configure managed cross-site connections, Data Grid Operator creates router pods in each Data Grid cluster. Data Grid pods use the <cluster_name>-site service to connect to these router pods and send backup requests. Router pods maintain a record of all pod IP addresses and parse RELAY message headers to forward backup requests to the correct Data Grid cluster. If a router pod crashes then all Data Grid pods start using any other available router pod until OpenShift restores it. Important To manage cross-site connections, Data Grid Operator uses the Kubernetes API. Each OpenShift cluster must have network access to the remote Kubernetes API and a service account token for each backup cluster. Note Data Grid clusters do not start running until Data Grid Operator discovers all backup locations that you configure. 13.2.1. 
Creating service account tokens for managed cross-site connections Generate service account tokens on OpenShift clusters that allow Data Grid Operator to automatically discover Data Grid clusters and manage cross-site connections. Prerequisites Ensure all OpenShift clusters have access to the Kubernetes API. Data Grid Operator uses this API to manage cross-site connections. Note Data Grid Operator does not modify remote Data Grid clusters. The service account tokens provide read-only access through the Kubernetes API. Procedure Log in to an OpenShift cluster. Create a service account. For example, create a service account at LON : Add the view role to the service account with the following command: If you use a NodePort service to expose Data Grid clusters on the network, you must also add the cluster-reader role to the service account: Repeat the preceding steps on your other OpenShift clusters. Exchange service account tokens on each OpenShift cluster. 13.2.2. Exchanging service account tokens Generate service account tokens on your OpenShift clusters and add them into secrets at each backup location. The tokens that you generate in this procedure do not expire. For bound service account tokens, see Exchanging bound service account tokens . Prerequisites You have created a service account. Procedure Log in to your OpenShift cluster. Create a service account token secret file as follows: sa-token.yaml apiVersion: v1 kind: Secret metadata: name: ispn-xsite-sa-token 1 annotations: kubernetes.io/service-account.name: "<service-account>" 2 type: kubernetes.io/service-account-token 1 Specifies the name of the secret. 2 Specifies the service account name. Create the secret in your OpenShift cluster: oc -n <namespace> create -f sa-token.yaml Retrieve the service account token: oc -n <namespace> get secrets ispn-xsite-sa-token -o jsonpath="{.data.token}" | base64 -d The command prints the token in the terminal. Copy the token for deployment in the backup OpenShift cluster. Log in to the backup OpenShift cluster. Add the service account token for a backup location: oc -n <namespace> create secret generic <token-secret> --from-literal=token=<token> The <token-secret> is the name of the secret configured in the Infinispan CR. steps Repeat the preceding steps on your other OpenShift clusters. Additional resources Creating a service account token secret 13.2.3. Exchanging bound service account tokens Create service account tokens with a limited lifespan and add them into secrets at each backup location. You must refresh the token periodically to prevent Data Grid Operator from losing access to the remote OpenShift cluster. For non-expiring tokens, see Exchanging service account tokens . Prerequisites You have created a service account. Procedure Log in to your OpenShift cluster. Create a bound token for the service account: oc -n <namespace> create token <service-account> Note By default, service account tokens are valid for one hour. Use the command option --duration to specify the lifespan in seconds.. The command prints the token in the terminal. Copy the token for deployment in the backup OpenShift cluster(s). Log in to the backup OpenShift cluster. Add the service account token for a backup location: oc -n <namespace> create secret generic <token-secret> --from-literal=token=<token> The <token-secret> is the name of the secret configured in the Infinispan CR. Repeat the steps on other OpenShift clusters. 
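Taken together, the preceding sections amount to the following exchange for a pair of sites. This is a sketch only: it assumes a service account named lon in the namespace rhdg-namespace and a target secret named lon-token , and the oc create sa and oc adm policy lines are typical invocations for creating the account and granting the view and cluster-reader roles rather than commands copied from this guide; check the options available on your cluster before using them.

# On the LON cluster: create the service account and grant read-only access
oc -n rhdg-namespace create sa lon
oc -n rhdg-namespace adm policy add-role-to-user view -z lon
# Only needed when you expose Data Grid with a NodePort service:
oc -n rhdg-namespace adm policy add-cluster-role-to-user cluster-reader -z lon

# Generate a bound token for the service account (valid for one hour by default)
oc -n rhdg-namespace create token lon

# On the NYC cluster: store the copied token in the secret referenced by the Infinispan CR
oc -n <nyc_namespace> create secret generic lon-token --from-literal=token=<token>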
Deleting expired tokens When a token expires, delete the expired token secret, and then repeat the procedure to generate and exchange a new one. Log in to the backup OpenShift cluster. Delete the expired secret <token-secret> : oc -n <namespace> delete secrets <token-secret> Repeat the procedure to create a new token and generate a new <token-secret> . Additional resources Creating bound service account tokens 13.2.4. Configuring managed cross-site connections Configure Data Grid Operator to establish cross-site views with Data Grid clusters. Prerequisites Determine a suitable expose type for cross-site replication. If you use an OpenShift Route you must add a keystore with TLS certificates and secure cross-site connections. Create and exchange Red Hat OpenShift service account tokens for each Data Grid cluster. Procedure Create an Infinispan CR for each Data Grid cluster. Specify the name of the local site with spec.service.sites.local.name . Configure the expose type for cross-site replication. Set the value of the spec.service.sites.local.expose.type field to one of the following: NodePort LoadBalancer Route Optionally specify a port or custom hostname with the following fields: spec.service.sites.local.expose.nodePort if you use a NodePort service. spec.service.sites.local.expose.port if you use a LoadBalancer service. spec.service.sites.local.expose.routeHostName if you use an OpenShift Route . Specify the number of pods that can send RELAY messages with the service.sites.local.maxRelayNodes field. Tip Configure all pods in your cluster to send RELAY messages for better performance. If all pods send backup requests directly, then no pods need to forward backup requests. Provide the name, URL, and secret for each Data Grid cluster that acts as a backup location with spec.service.sites.locations . If Data Grid cluster names or namespaces at the remote site do not match the local site, specify those values with the clusterName and namespace fields. The following are example Infinispan CR definitions for LON and NYC : LON apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 version: <Data Grid_version> service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: NYC clusterName: <nyc_cluster_name> namespace: <nyc_cluster_namespace> url: openshift://api.rhdg-nyc.openshift-aws.myhost.com:6443 secretName: nyc-token logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error NYC apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: nyc-cluster spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid sites: local: name: NYC expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: LON clusterName: infinispan namespace: rhdg-namespace url: openshift://api.rhdg-lon.openshift-aws.myhost.com:6443 secretName: lon-token logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error Important Be sure to adjust logging categories in your Infinispan CR to decrease log levels for JGroups TCP and RELAY2 protocols. This prevents a large number of log files from uses container storage. spec: logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error Configure your Infinispan CRs with any other Data Grid service resources and then apply the changes. Verify that Data Grid clusters form a cross-site view. Retrieve the Infinispan CR. 
Check for the type: CrossSiteViewFormed condition. steps If your clusters have formed a cross-site view, you can start adding backup locations to caches. Additional resources Data Grid guide to cross-site replication 13.3. Manually configuring cross-site connections You can specify static network connection details to perform cross-site replication with Data Grid clusters running outside OpenShift. Manual cross-site connections are necessary in any scenario where access to the Kubernetes API is not available outside the OpenShift cluster where Data Grid runs. Prerequisites Determine a suitable expose type for cross-site replication. If you use an OpenShift Route you must add a keystore with TLS certificates and secure cross-site connections. Ensure you have the correct host names and ports for each Data Grid cluster and each <cluster-name>-site service. Manually connecting Data Grid clusters to form cross-site views requires predictable network locations for Data Grid services, which means you need to know the network locations before they are created. Procedure Create an Infinispan CR for each Data Grid cluster. Specify the name of the local site with spec.service.sites.local.name . Configure the expose type for cross-site replication. Set the value of the spec.service.sites.local.expose.type field to one of the following: NodePort LoadBalancer Route Optionally specify a port or custom hostname with the following fields: spec.service.sites.local.expose.nodePort if you use a NodePort service. spec.service.sites.local.expose.port if you use a LoadBalancer service. spec.service.sites.local.expose.routeHostName if you use an OpenShift Route . Provide the name and static URL for each Data Grid cluster that acts as a backup location with spec.service.sites.locations , for example: LON apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 version: <Data Grid_version> service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: NYC url: infinispan+xsite://infinispan-nyc.myhost.com:7900 logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error NYC apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid sites: local: name: NYC expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: LON url: infinispan+xsite://infinispan-lon.myhost.com logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error Important Be sure to adjust logging categories in your Infinispan CR to decrease log levels for JGroups TCP and RELAY2 protocols. This prevents a large number of log files from uses container storage. spec: logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error Configure your Infinispan CRs with any other Data Grid service resources and then apply the changes. Verify that Data Grid clusters form a cross-site view. Retrieve the Infinispan CR. Check for the type: CrossSiteViewFormed condition. steps If your clusters have formed a cross-site view, you can start adding backup locations to caches. Additional resources Data Grid guide to cross-site replication 13.4. Allocating CPU and memory for Gossip router pod Allocate CPU and memory resources to Data Grid Gossip router. Prerequisite Have Gossip router enabled. 
The service.sites.local.discovery.launchGossipRouter property must be set to true , which is the default value. Procedure Allocate the number of CPU units using the service.sites.local.discovery.cpu field. Allocate the amount of memory, in bytes, using the service.sites.local.discovery.memory field. The cpu and memory fields have values in the format of <limit>:<requests> . For example, cpu: "2000m:1000m" limits pods to a maximum of 2000m of CPU and requests 1000m of CPU for each pod at startup. Specifying a single value sets both the limit and request. Apply your Infinispan CR. 13.5. Disabling local Gossip router and service The Data Grid Operator starts a Gossip router on each site, but you only need a single Gossip router to manage traffic between the Data Grid cluster members. You can disable the additional Gossip routers to save resources. For example, you have Data Grid clusters in LON and NYC sites. The following procedure shows how you can disable Gossip router in LON site and connect to NYC that has the Gossip router enabled. Procedure Create an Infinispan CR for each Data Grid cluster. Specify the name of the local site with the spec.service.sites.local.name field. For the LON cluster, set false as the value for the spec.service.sites.local.discovery.launchGossipRouter field. For the LON cluster, specify the url with the spec.service.sites.locations.url to connect to the NYC . In the NYC configuration, do not specify the spec.service.sites.locations.url . LON apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 service: type: DataGrid sites: local: name: LON discovery: launchGossipRouter: false locations: - name: NYC url: infinispan+xsite://infinispan-nyc.myhost.com:7900 NYC apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 service: type: DataGrid sites: local: name: NYC locations: - name: LON Important If you have three or more sites, Data Grid recommends to keep the Gossip router enabled on all the remote sites. When you have multiple Gossip routers and one of them becomes unavailable, the remaining routers continue exchanging messages. If a single Gossip router is defined, and it becomes unavailable, the connection between the remote sites breaks. steps If your clusters have formed a cross-site view, you can start adding backup locations to caches. Additional resources Data Grid cross-site replication 13.6. Resources for configuring cross-site replication The following tables provides fields and descriptions for cross-site resources. Table 13.1. service.type Field Description service.type: DataGrid Data Grid supports cross-site replication with Data Grid service clusters only. Table 13.2. service.sites.local Field Description service.sites.local.name Names the local site where a Data Grid cluster runs. service.sites.local.maxRelayNodes Specifies the maximum number of pods that can send RELAY messages for cross-site replication. The default value is 1 . service.sites.local.discovery.launchGossipRouter If false , the cross-site services and the Gossip router pod are not created in the local site. The default value is true . service.sites.local.discovery.memory Allocates the amount of memory in bytes. It uses the following format <limit>:<requests> (example "2Gi:1Gi" ). service.sites.local.discovery.cpu Allocates the number of CPU units. It uses the following format <limit>:<requests> (example "2000m:1000m" ). service.sites.local.expose.type Specifies the network service for cross-site replication. 
Data Grid clusters use this service to communicate and perform backup operations. You can set the value to NodePort , LoadBalancer , or Route . service.sites.local.expose.nodePort Specifies a static port within the default range of 30000 to 32767 if you expose Data Grid through a NodePort service. If you do not specify a port, the platform selects an available one. service.sites.local.expose.port Specifies the network port for the service if you expose Data Grid through a LoadBalancer service. The default port is 7900 . service.sites.local.expose.routeHostName Specifies a custom hostname if you expose Data Grid through an OpenShift Route . If you do not set a value then OpenShift generates a hostname. Table 13.3. service.sites.locations Field Description service.sites.locations Provides connection information for all backup locations. service.sites.locations.name Specifies a backup location that matches .spec.service.sites.local.name . service.sites.locations.url Specifies the URL of the Kubernetes API for managed connections or a static URL for manual connections. Use openshift:// to specify the URL of the Kubernetes API for an OpenShift cluster. Note that the openshift:// URL must present a valid, CA-signed certificate. You cannot use self-signed certificates. Use the infinispan+xsite://<hostname>:<port> format for static hostnames and ports. The default port is 7900 . service.sites.locations.secretName Specifies the secret that contains the service account token for the backup site. service.sites.locations.clusterName Specifies the cluster name at the backup location if it is different to the cluster name at the local site. service.sites.locations.namespace Specifies the namespace of the Data Grid cluster at the backup location if it does not match the namespace at the local site. Managed cross-site connections Manual cross-site connections 13.7. Securing cross-site connections Add keystores and trust stores so that Data Grid clusters can secure cross-site replication traffic. You must add a keystore to use an OpenShift Route as the expose type for cross-site replication. Securing cross-site connections is optional if you use a NodePort or LoadBalancer as the expose type. Note Cross-site replication does not support the OpenShift CA service. You must provide your own certificates. Prerequisites Have a PKCS12 keystore that Data Grid can use to encrypt and decrypt RELAY messages. You must provide a keystore for relay pods and router pods to secure cross-site connections. The keystore can be the same for relay pods and router pods or you can provide separate keystores for each. You can also use the same keystore for each Data Grid cluster or a unique keystore for each cluster. Have a PKCS12 trust store that contains part of the certificate chain or root CA certificate that verifies public certificates for Data Grid relay pods and router pods. Procedure Create cross-site encryption secrets. Create keystore secrets. Create trust store secrets. Modify the Infinispan CR for each Data Grid cluster to specify the secret name for the encryption.transportKeyStore.secretName and encryption.routerKeyStore.secretName fields. Configure any other fields to encrypt RELAY messages as required and then apply the changes. apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> expose: type: LoadBalancer service: type: DataGrid sites: local: name: SiteA # ... 
encryption: protocol: TLSv1.3 transportKeyStore: secretName: transport-tls-secret alias: transport filename: keystore.p12 routerKeyStore: secretName: router-tls-secret alias: router filename: keystore.p12 trustStore: secretName: truststore-tls-secret filename: truststore.p12 locations: # ... 13.7.1. Resources for configuring cross-site encryption The following tables provides fields and descriptions for encrypting cross-site connections. Table 13.4. service.type.sites.local.encryption Field Description service.type.sites.local.encryption.protocol Specifies the TLS protocol to use for cross-site connections. The default value is TLSv1.3 but you can set TLSv1.2 if required. service.type.sites.local.encryption.transportKeyStore Configures a keystore secret for relay pods. service.type.sites.local.encryption.routerKeyStore Configures a keystore secret for router pods. service.type.sites.local.encryption.trustStore Configures a trust store secret for relay pods and router pods. Table 13.5. service.type.sites.local.encryption.transportKeyStore Field Description secretName Specifies the secret that contains a keystore that relay pods can use to encrypt and decrypt RELAY messages. This field is required. alias Optionally specifies the alias of the certificate in the keystore. The default value is transport . filename Optionally specifies the filename of the keystore. The default value is keystore.p12 . Table 13.6. service.type.sites.local.encryption.routerKeyStore Field Description secretName Specifies the secret that contains a keystore that router pods can use to encrypt and decrypt RELAY messages. This field is required. alias Optionally specifies the alias of the certificate in the keystore. The default value is router . filename Optionally specifies the filename of the keystore. The default value is keystore.p12 . Table 13.7. service.type.sites.local.encryption.trustStore Field Description secretName Specifies the secret that contains a trust store to verify public certificates for relay pods and router pods. This field is required. filename Optionally specifies the filename of the trust store. The default value is truststore.p12 . 13.7.2. Cross-site encryption secrets Cross-site replication encryption secrets add keystores and trust store for securing cross-site connections. Cross-site encryption secrets Field Description stringData.password Specifies the password for the keystore or trust store. stringData.type Optionally specifies the keystore or trust store type. The default value is pkcs12 . data.<file-name> Adds a base64-encoded keystore or trust store. 13.8. Configuring sites in the same OpenShift cluster For evaluation and demonstration purposes, you can configure Data Grid to back up between pods in the same OpenShift cluster. Important Using ClusterIP as the expose type for cross-site replication is intended for demonstration purposes only. It would be appropriate to use this expose type only to perform a temporary proof-of-concept deployment on a laptop or something of that nature. Procedure Create an Infinispan CR for each Data Grid cluster. Specify the name of the local site with spec.service.sites.local.name . Set ClusterIP as the value of the spec.service.sites.local.expose.type field. Provide the name of the Data Grid cluster that acts as a backup location with spec.service.sites.locations.clusterName . If both Data Grid clusters have the same name, specify the namespace of the backup location with spec.service.sites.locations.namespace . 
apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: example-clustera spec: replicas: 1 expose: type: LoadBalancer service: type: DataGrid sites: local: name: SiteA expose: type: ClusterIP maxRelayNodes: 1 locations: - name: SiteB clusterName: example-clusterb namespace: cluster-namespace Configure your Infinispan CRs with any other Data Grid service resources and then apply the changes. Verify that Data Grid clusters form a cross-site view. Retrieve the Infinispan CR. Check for the type: CrossSiteViewFormed condition.
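If you prefer not to scan the full YAML output by eye, the condition can be extracted directly with a jsonpath query. The following is a minimal sketch that assumes the CR is named example-clustera, as in the example above, and that the CR status follows the usual Kubernetes conditions layout; adjust the name and namespace for your deployment:
oc -n <namespace> get infinispan example-clustera -o jsonpath='{.status.conditions[?(@.type=="CrossSiteViewFormed")].status}'
A value of True indicates that the clusters have formed a cross-site view.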
[ "create sa -n <namespace> lon", "policy add-role-to-user view -n <namespace> -z lon", "adm policy add-cluster-role-to-user cluster-reader -z lon -n <namespace>", "apiVersion: v1 kind: Secret metadata: name: ispn-xsite-sa-token 1 annotations: kubernetes.io/service-account.name: \"<service-account>\" 2 type: kubernetes.io/service-account-token", "-n <namespace> create -f sa-token.yaml", "-n <namespace> get secrets ispn-xsite-sa-token -o jsonpath=\"{.data.token}\" | base64 -d", "-n <namespace> create secret generic <token-secret> --from-literal=token=<token>", "-n <namespace> create token <service-account>", "-n <namespace> create secret generic <token-secret> --from-literal=token=<token>", "-n <namespace> delete secrets <token-secret>", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 version: <Data Grid_version> service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: NYC clusterName: <nyc_cluster_name> namespace: <nyc_cluster_namespace> url: openshift://api.rhdg-nyc.openshift-aws.myhost.com:6443 secretName: nyc-token logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: nyc-cluster spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid sites: local: name: NYC expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: LON clusterName: infinispan namespace: rhdg-namespace url: openshift://api.rhdg-lon.openshift-aws.myhost.com:6443 secretName: lon-token logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "spec: logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "get infinispan -o yaml", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 version: <Data Grid_version> service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: NYC url: infinispan+xsite://infinispan-nyc.myhost.com:7900 logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid sites: local: name: NYC expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: LON url: infinispan+xsite://infinispan-lon.myhost.com logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "spec: logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "get infinispan -o yaml", "spec: service: type: DataGrid sites: local: name: LON discovery: launchGossipRouter: true memory: \"2Gi:1Gi\" cpu: \"2000m:1000m\"", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 service: type: DataGrid sites: local: name: LON discovery: launchGossipRouter: false locations: - name: NYC url: infinispan+xsite://infinispan-nyc.myhost.com:7900", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 service: type: DataGrid sites: local: name: NYC locations: - name: LON", "spec: service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer maxRelayNodes: 1 locations: - name: NYC clusterName: <nyc_cluster_name> namespace: <nyc_cluster_namespace> url: 
openshift://api.site-b.devcluster.openshift.com:6443 secretName: nyc-token", "spec: service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: NYC url: infinispan+xsite://infinispan-nyc.myhost.com:7900", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> expose: type: LoadBalancer service: type: DataGrid sites: local: name: SiteA # encryption: protocol: TLSv1.3 transportKeyStore: secretName: transport-tls-secret alias: transport filename: keystore.p12 routerKeyStore: secretName: router-tls-secret alias: router filename: keystore.p12 trustStore: secretName: truststore-tls-secret filename: truststore.p12 locations: #", "apiVersion: v1 kind: Secret metadata: name: tls-secret type: Opaque stringData: password: changeme type: pkcs12 data: <file-name>: \"MIIKDgIBAzCCCdQGCSqGSIb3DQEHA...\"", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: example-clustera spec: replicas: 1 expose: type: LoadBalancer service: type: DataGrid sites: local: name: SiteA expose: type: ClusterIP maxRelayNodes: 1 locations: - name: SiteB clusterName: example-clusterb namespace: cluster-namespace", "get infinispan -o yaml" ]
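The keystore and trust store secrets used in the encryption example above can also be created imperatively instead of applying a Secret manifest. This is a sketch only, assuming local keystore.p12 and truststore.p12 files and reusing the example secret names and password from this chapter; any names work as long as they match the secretName fields in your Infinispan CR:
oc -n <namespace> create secret generic transport-tls-secret --from-literal=password=changeme --from-file=keystore.p12=./keystore.p12
oc -n <namespace> create secret generic router-tls-secret --from-literal=password=changeme --from-file=keystore.p12=./keystore.p12
oc -n <namespace> create secret generic truststore-tls-secret --from-literal=password=changeme --from-file=truststore.p12=./truststore.p12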
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/setting-up-xsite
27.3. Installing libStorageMgmt
27.3. Installing libStorageMgmt To install libStorageMgmt for command-line use, including the required run-time libraries and simulator plug-ins, use the following command: To develop C applications that utilize the library, install the libstoragemgmt-devel package with the following command: To install libStorageMgmt for use with hardware arrays, select one or more of the appropriate plug-in packages with the following command: The following plug-ins are available: libstoragemgmt-smis-plugin Generic SMI-S array support. libstoragemgmt-netapp-plugin Specific support for NetApp files. libstoragemgmt-nstor-plugin Specific support for NexentaStor. libstoragemgmt-targetd-plugin Specific support for targetd. The daemon is then installed and configured to start automatically after a reboot. To use it immediately without rebooting, start the daemon manually. Managing an array requires support through a plug-in. The base install package includes open source plug-ins for a number of different vendors. Additional plug-in packages will be available separately as array support improves. Currently supported hardware is constantly changing and improving. The libStorageMgmt daemon ( lsmd ) behaves like any standard service on the system. To check the status of the libStorageMgmt service: To stop the service: To start the service:
[ "yum install libstoragemgmt libstoragemgmt-python", "yum install libstoragemgmt-devel", "yum install libstoragemgmt- name -plugin", "systemctl status libstoragemgmt", "systemctl stop libstoragemgmt", "systemctl start libstoragemgmt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-libstoragemgmt-installation
Chapter 2. Monitoring Camel K integrations
Chapter 2. Monitoring Camel K integrations Red Hat Integration - Camel K monitoring is based on the OpenShift monitoring system . This chapter explains how to use the available options for monitoring Red Hat Integration - Camel K integrations at runtime. You can use the Prometheus Operator that is already deployed as part of OpenShift Monitoring to monitor your own applications. Section 2.1, "Enabling user workload monitoring in OpenShift" Section 2.2, "Configuring Camel K integration metrics" Section 2.3, "Adding custom Camel K integration metrics" 2.1. Enabling user workload monitoring in OpenShift OpenShift 4.3 or higher includes an embedded Prometheus Operator already deployed as part of OpenShift Monitoring. This section explains how to enable monitoring of your own application services in OpenShift Monitoring. This option avoids the additional overhead of installing and managing a separate Prometheus instance. Prerequisites You must have cluster administrator access to an OpenShift cluster on which the Camel K Operator is installed. See Installing Camel K . Procedure Enter the following command to check if the cluster-monitoring-config ConfigMap object exists in the openshift-monitoring project : USD oc -n openshift-monitoring get configmap cluster-monitoring-config Create the cluster-monitoring-config ConfigMap if this does not already exist: USD oc -n openshift-monitoring create configmap cluster-monitoring-config Edit the cluster-monitoring-config ConfigMap: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Under data:config.yaml: , set enableUserWorkload to true : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true Additional resources Enabling monitoring for user-defined projects 2.2. Configuring Camel K integration metrics You can configure monitoring of Camel K integrations automatically using the Camel K Prometheus trait at runtime. This automates the configuration of dependencies and integration Pods to expose a metrics endpoint, which is then discovered and displayed by Prometheus. The Camel Quarkus MicroProfile Metrics extension automatically collects and exposes the default Camel K metrics in the OpenMetrics format. Prerequisites You must have already enabled monitoring of your own services in OpenShift. See Enabling user workload monitoring in OpenShift . Procedure Enter the following command to run your Camel K integration with the Prometheus trait enabled: kamel run myIntegration.java -t prometheus.enabled=true Alternatively, you can enable the Prometheus trait globally once, by updating the integration platform as follows: USD oc patch itp camel-k --type=merge -p '{"spec":{"traits":{"prometheus":{"configuration":{"enabled":true}}}}}' View monitoring of Camel K integration metrics in Prometheus. For example, for embedded Prometheus, select Monitoring > Metrics in the OpenShift administrator or developer web console. Enter the Camel K metric that you want to view. For example, in the Administrator console, under Insert Metric at Cursor , enter application_camel_context_uptime_seconds , and click Run Queries . Click Add Query to view additional metrics. Default Camel Metrics provided by PROMETHEUS TRAIT Some Camel specific metrics are available out of the box. 
Name Type Description application_camel_message_history_processing timer Sample of performance of each node in the route when message history is enabled application_camel_route_count gauge Number of routes added application_camel_route_running_count gauge Number of routes running application_camel_[route or context]_exchanges_inflight_count gauge Route inflight messages for a CamelContext or a route application_camel_[route or context]_exchanges_total counter Total number of processed exchanges for a CamelContext or a route application_camel_[route or context]_exchanges_completed_total counter Number of successfully completed exchanges for a CamelContext or a route application_camel_[route or context]_exchanges_failed_total counter Number of failed exchanges for a CamelContext or a route application_camel_[route or context]_failuresHandled_total counter Number of failures handled for a CamelContext or a route application_camel_[route or context]_externalRedeliveries_total counter Number of externally initiated redeliveries (such as from a JMS broker) for a CamelContext or a route application_camel_context_status gauge The status of the Camel Context application_camel_context_uptime_seconds gauge The amount of time since the Camel Context was started application_camel_[route or exchange] processing [rate_per_second or one_min_rate_per_second or five_min_rate_per_second or fifteen_min_rate_per_second or min_seconds or max_seconds or mean_second or stddev_seconds] gauge Exchange message or route processing with multiple options application_camel_[route or exchange]_processing_seconds summary Exchange message or route processing metric Additional resources Prometheus Trait Camel Quarkus MicroProfile Metrics 2.3. Adding custom Camel K integration metrics You can add custom metrics to your Camel K integrations by using the Camel MicroProfile Metrics component and annotations in your Java code. These custom metrics will then be automatically discovered and displayed by Prometheus. This section shows examples of adding Camel MicroProfile Metrics annotations to Camel K integration and service implementation code. Prerequisites You must have already enabled monitoring of your own services in OpenShift. See Enabling user workload monitoring in OpenShift . Procedure Register the custom metrics in your Camel integration code using Camel MicroProfile Metrics component annotations.
The following example shows a Metrics.java integration: // camel-k: language=java trait=prometheus.enabled=true dependency=mvn:org.my/app:1.0 1 import org.apache.camel.Exchange; import org.apache.camel.LoggingLevel; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.microprofile.metrics.MicroProfileMetricsConstants; import javax.enterprise.context.ApplicationScoped; @ApplicationScoped public class Metrics extends RouteBuilder { @Override public void configure() { onException() .handled(true) .maximumRedeliveries(2) .logStackTrace(false) .logExhausted(false) .log(LoggingLevel.ERROR, "Failed processing USD{body}") // Register the 'redelivery' meter .to("microprofile-metrics:meter:redelivery?mark=2") // Register the 'error' meter .to("microprofile-metrics:meter:error"); 2 from("timer:stream?period=1000") .routeId("unreliable-service") .setBody(header(Exchange.TIMER_COUNTER).prepend("event #")) .log("Processing USD{body}...") // Register the 'generated' meter .to("microprofile-metrics:meter:generated") 3 // Register the 'attempt' meter via @Metered in Service.java .bean("service") 4 .filter(header(Exchange.REDELIVERED)) .log(LoggingLevel.WARN, "Processed USD{body} after USD{header.CamelRedeliveryCounter} retries") .setHeader(MicroProfileMetricsConstants.HEADER_METER_MARK, header(Exchange.REDELIVERY_COUNTER)) // Register the 'redelivery' meter .to("microprofile-metrics:meter:redelivery") 5 .end() .log("Successfully processed USD{body}") // Register the 'success' meter .to("microprofile-metrics:meter:success"); 6 } } 1 Uses the Camel K modeline to automatically configure the Prometheus trait and Maven dependencies 2 error : Metric for the number of errors corresponding to the number of events that have not been processed 3 generated : Metric for the number of events to be processed 4 attempt : Metric for the number of calls made to the service bean to process incoming events 5 redelivery : Metric for the number of retries made to process the event 6 success : Metric for the number of events successfully processed Add Camel MicroProfile Metrics annotations to any implementation files as needed. The following example shows the service bean called by the Camel K integration, which generates random failures: package com.redhat.integration; import java.util.Random; import org.apache.camel.Exchange; import org.apache.camel.RuntimeExchangeException; import org.eclipse.microprofile.metrics.Meter; import org.eclipse.microprofile.metrics.annotation.Metered; import org.eclipse.microprofile.metrics.annotation.Metric; import javax.inject.Named; import javax.enterprise.context.ApplicationScoped; @Named("service") @ApplicationScoped @io.quarkus.arc.Unremovable public class Service { //Register the attempt meter @Metered(absolute = true) public void attempt(Exchange exchange) { 1 Random rand = new Random(); if (rand.nextDouble() < 0.5) { throw new RuntimeExchangeException("Random failure", exchange); 2 } } } 1 The @Metered MicroProfile Metrics annotation declares the meter and the name is automatically generated based on the metrics method name, in this case, attempt . 2 This example fails randomly to help generate errors for metrics. Follow the steps in Configuring Camel K integration metrics to run the integration and view the custom Camel K metrics in Prometheus. In this case, the example already uses the Camel K modeline in Metrics.java to automatically configure Prometheus and the required Maven dependencies for Service.java . 
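Because the integration and the service bean live in separate source files, both files are passed to the kamel CLI when running this example. A minimal sketch, assuming both files are in the current directory; the modeline in Metrics.java already enables the Prometheus trait, so no additional -t flag is required:
kamel run Metrics.java Service.java
Once the integration is running, the custom meters appear alongside the default metrics from the previous section, under names derived from redelivery, error, generated, attempt, and success.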
Additional resources Camel MicroProfile Metrics component Camel Quarkus MicroProfile Metrics Extension
[ "oc -n openshift-monitoring get configmap cluster-monitoring-config", "oc -n openshift-monitoring create configmap cluster-monitoring-config", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true", "kamel run myIntegration.java -t prometheus.enabled=true", "oc patch itp camel-k --type=merge -p '{\"spec\":{\"traits\":{\"prometheus\":{\"configuration\":{\"enabled\":true}}}}}'", "// camel-k: language=java trait=prometheus.enabled=true dependency=mvn:org.my/app:1.0 1 import org.apache.camel.Exchange; import org.apache.camel.LoggingLevel; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.microprofile.metrics.MicroProfileMetricsConstants; import javax.enterprise.context.ApplicationScoped; @ApplicationScoped public class Metrics extends RouteBuilder { @Override public void configure() { onException() .handled(true) .maximumRedeliveries(2) .logStackTrace(false) .logExhausted(false) .log(LoggingLevel.ERROR, \"Failed processing USD{body}\") // Register the 'redelivery' meter .to(\"microprofile-metrics:meter:redelivery?mark=2\") // Register the 'error' meter .to(\"microprofile-metrics:meter:error\"); 2 from(\"timer:stream?period=1000\") .routeId(\"unreliable-service\") .setBody(header(Exchange.TIMER_COUNTER).prepend(\"event #\")) .log(\"Processing USD{body}...\") // Register the 'generated' meter .to(\"microprofile-metrics:meter:generated\") 3 // Register the 'attempt' meter via @Metered in Service.java .bean(\"service\") 4 .filter(header(Exchange.REDELIVERED)) .log(LoggingLevel.WARN, \"Processed USD{body} after USD{header.CamelRedeliveryCounter} retries\") .setHeader(MicroProfileMetricsConstants.HEADER_METER_MARK, header(Exchange.REDELIVERY_COUNTER)) // Register the 'redelivery' meter .to(\"microprofile-metrics:meter:redelivery\") 5 .end() .log(\"Successfully processed USD{body}\") // Register the 'success' meter .to(\"microprofile-metrics:meter:success\"); 6 } }", "package com.redhat.integration; import java.util.Random; import org.apache.camel.Exchange; import org.apache.camel.RuntimeExchangeException; import org.eclipse.microprofile.metrics.Meter; import org.eclipse.microprofile.metrics.annotation.Metered; import org.eclipse.microprofile.metrics.annotation.Metric; import javax.inject.Named; import javax.enterprise.context.ApplicationScoped; @Named(\"service\") @ApplicationScoped @io.quarkus.arc.Unremovable public class Service { //Register the attempt meter @Metered(absolute = true) public void attempt(Exchange exchange) { 1 Random rand = new Random(); if (rand.nextDouble() < 0.5) { throw new RuntimeExchangeException(\"Random failure\", exchange); 2 } } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/developing_and_managing_integrations_using_camel_k/monitoring-camel-k
Chapter 15. Provisioning cloud instances on Google Compute Engine
Chapter 15. Provisioning cloud instances on Google Compute Engine Satellite can interact with Google Compute Engine (GCE), including creating new virtual machines and controlling their power management states. You can only use golden images supported by Red Hat with Satellite for creating GCE hosts. Prerequisites Configure a domain and subnet on Satellite. For more information about networking requirements, see Chapter 3, Configuring networking . You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing content . Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing content . In your GCE project, configure a service account with the necessary IAM Compute role. For more information, see Compute Engine IAM roles in the GCE documentation. In your GCE project-wise metadata, set the enable-oslogin to FALSE . For more information, see Enabling or disabling OS Login in the GCE documentation. Optional: If you want to use Puppet with GCE hosts, navigate to Administer > Settings > Puppet and enable the Use UUID for certificates setting to configure Puppet to use consistent Puppet certificate IDs. Based on your needs, associate a finish or user_data provisioning template with the operating system you want to use. For more information, see Provisioning Templates in Provisioning hosts . 15.1. Adding a Google GCE connection to Satellite Server Use this procedure to add Google Compute Engine (GCE) as a compute resource in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In Google GCE, generate a service account key in JSON format. For more information, see Create and manage service account keys in the GCE documentation. In the Satellite web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource . In the Name field, enter a name for the compute resource. From the Provider list, select Google . Optional: In the Description field, enter a description for the resource. In the JSON key field, click Choose File and locate your service account key for upload from your local machine. Click Load Zones to populate the list of zones from your GCE environment. From the Zone list, select the GCE zone to use. Click Submit . CLI procedure In Google GCE, generate a service account key in JSON format. For more information, see Create and manage service account keys in the GCE documentation. Copy the file from your local machine to Satellite Server: On Satellite Server, change the owner for your service account key to the foreman user: On Satellite Server, configure permissions for your service account key to ensure that the file is readable: On Satellite Server, restore SELinux context for your service account key: Use the hammer compute-resource create command to add a GCE compute resource to Satellite: 15.2. Adding Google Compute Engine images to Satellite Server To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Satellite Server. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click the name of the Google Compute Engine connection. Click Create Image . In the Name field, enter a name for the image. From the Operating System list, select the base operating system of the image. 
From the Architecture list, select the operating system architecture. In the Username field, enter the SSH user name for image access. Specify a user other than root , because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers. From the Image list, select an image from the Google Compute Engine compute resource. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. With the --username option, specify a user other than root , because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers. 15.3. Adding Google GCE details to a compute profile Use this procedure to add Google GCE hardware settings to a compute profile. When you create a host on Google GCE using this compute profile, these settings are automatically populated. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles . In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile , enter a Name , and click Submit . Click the name of the GCE compute resource. From the Machine Type list, select the machine type to use for provisioning. From the Image list, select the image to use for provisioning. From the Network list, select the Google GCE network to use for provisioning. Optional: Select the Associate Ephemeral External IP checkbox to assign a dynamic ephemeral IP address that Satellite uses to communicate with the host. This public IP address changes when you reboot the host. If you need a permanent IP address, reserve a static public IP address on Google GCE and attach it to the host. In the Size (GB) field, enter the size of the storage to create on the host. Click Submit to save the compute profile. CLI procedure Create a compute profile to use with the Google GCE compute resource: Add GCE details to the compute profile: 15.4. Creating image-based hosts on Google Compute Engine In Satellite, you can use Google Compute Engine provisioning to create hosts from an existing image. The new host entry triggers the Google Compute Engine server to create the instance using the pre-existing image as a basis for the new volume. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Optional: Click the Organization tab and change the organization context to match your requirement. Optional: Click the Location tab and change the location context to match your requirement. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form. From the Deploy on list, select the Google Compute Engine connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. From the Lifecycle Environment list, select the environment. Click the Interfaces tab, and on the interface of the host, click Edit . Verify that the fields are populated with values. Note in particular: Satellite automatically assigns an IP address for the new host. Ensure that the MAC address field is blank. 
Google Compute Engine assigns a MAC address to the host during provisioning. The Name from the Host tab becomes the DNS name . The Domain field is populated with the required domain. Ensure that Satellite automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab, and confirm that all fields automatically contain values. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host entry. CLI procedure Create the host with the hammer host create command and include --provision-method image . Replace the values in the following example with the appropriate values for your environment. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 15.5. Deleting a VM on Google GCE You can delete VMs running on Google GCE on your Satellite Server. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Google GCE provider. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Google GCE compute resource while retaining any associated hosts within Satellite. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually.
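If you manage hosts from the command line, the orphaned host record can also be removed with hammer. A rough sketch, assuming the host is named my-host.example.com; whether an associated virtual machine is destroyed as well depends on your Satellite settings, so verify the behavior in a test environment first:
hammer host delete --name my-host.example.com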
[ "scp My_GCE_Key .json [email protected]:/etc/foreman/ My_GCE_Key .json", "chown root:foreman /etc/foreman/ My_GCE_Key .json", "chmod 0640 /etc/foreman/ My_GCE_Key .json", "restorecon -vv /etc/foreman/ My_GCE_Key .json", "hammer compute-resource create --key-path \"/etc/foreman/ My_GCE_Key .json\" --name \" My_GCE_Compute_Resource \" --provider \"gce\" --zone \" My_Zone \"", "hammer compute-resource image create --name ' gce_image_name ' --compute-resource ' gce_cr ' --operatingsystem-id 1 --architecture-id 1 --uuid ' 3780108136525169178 ' --username ' admin '", "hammer compute-profile create --name My_GCE_Compute_Profile", "hammer compute-profile values create --compute-attributes \"machine_type=f1-micro,associate_external_ip=true,network=default\" --compute-profile \" My_GCE_Compute_Profile \" --compute-resource \" My_GCE_Compute_Resource \" --volume \" size_gb=20 \"", "hammer host create --architecture x86_64 --compute-profile \" My_Compute_Profile \" --compute-resource \" My_Compute_Resource \" --image \" My_GCE_Image \" --interface \"type=interface,domain_id=1,managed=true,primary=true,provision=true\" --location \" My_Location \" --name \" My_Host_Name \" --operatingsystem \" My_Operating_System \" --organization \" My_Organization \" --provision-method \"image\" --puppet-ca-proxy-id My_Puppet_CA_Proxy_ID --puppet-environment-id My_Puppet_Environment_ID --puppet-proxy-id My_Puppet_Proxy_ID --root-password \" My_Root_Password \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/provisioning_cloud_instances_on_google_compute_engine_gce-provisioning
Vulnerability reporting with Clair on Red Hat Quay
Vulnerability reporting with Clair on Red Hat Quay Red Hat Quay 3.13 Vulnerability reporting with Clair on Red Hat Quay Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/vulnerability_reporting_with_clair_on_red_hat_quay/index
Chapter 32. Defining SELinux User Maps
Chapter 32. Defining SELinux User Maps Security-enhanced Linux (SELinux) sets rules over what system users can access processes, files, directories, and system settings. Both the system administrator and system applications can define security contexts that restrict or allow access from other applications. As part of defining centralized security policies in the Identity Management domain, Identity Management provides a way to map IdM users to existing SELinux user contexts and grant or restrict access to clients and services within the IdM domain, per host, based on the defined SELinux policies. 32.1. About Identity Management, SELinux, and Mapping Users Identity Management does not create or modify the SELinux contexts on a system. Rather, it uses strings that might match existing contexts on the target hosts as the basis for mapping IdM users in the domain to SELinux users on a system. Security-enhanced Linux defines kernel-level, mandatory access controls for how processes can interact with other resources on a system. Based on the expected behavior of processes on the system, and on their security implications, specific rules called policies are set. This is in contrast to higher-level discretionary access controls which are concerned primarily with file ownership and user identity. Every resource on a system is assigned a context. Resources include users, applications, files, and processes. System users are associated with an SELinux role . The role is assigned both a multilayer security context (MLS) and a multi-category security context (MCS). The MLS and MCS contexts confine users so that they can only access certain processes, files, and operations on the system. To get the full list of available SELinux users: For more information about SELinux in Red Hat Enterprise Linux, see Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide . SELinux users and policies function at the system level, not the network level. This means that SELinux users are configured independently on each system. While this is acceptable in many situations, as SELinux has common defined system users and SELinux-aware services define their own policies, it causes problems when remote users and systems access local resources. Remote users and services can be assigned a default guest context without knowing what their actual SELinux user and role should be. Identity Management can integrate an identity domain with local SELinux services. Identity Management can map IdM users to configured SELinux roles per host, per host group , or based on an HBAC rule . Mapping SELinux and IdM users improves user administration: Remote users can be granted appropriate SELinux user contexts based on their IdM group assignments. This also allows administrators to consistently apply the same policies to the same users without having to create local accounts or reconfigure SELinux. The SELinux context associated with a user is centralized. SELinux policies can be planned and related to domain-wide security policies through settings like IdM host-based access control rules. Administrators gain environment-wide visibility and control over how users and systems are assigned in SELinux. An SELinux user map defines two separate relationships that exist between three parts: the SELinux user for the system, an IdM user, and an IdM host. First, the SELinux user map defines a relationship between the SELinux user and the IdM host (the local or target system). 
Second, it defines a relationship between the SELinux user and the IdM user. This arrangement allows administrators to set different SELinux users for the same IdM users, depending on which host they are accessing. The core of an SELinux mapping rule is the SELinux system user. Each map is first associated with an SELinux user. The SELinux users which are available for mapping are configured in the IdM server, so there is a central and universal list. In this way, IdM defines a set of SELinux users it knows about and can associate with an IdM user upon login. By default, these are: unconfined_u (also used as a default for IdM users) guest_u xguest_u user_u staff_u However, this default list can be modified and any native SELinux user (see Section 32.1, "About Identity Management, SELinux, and Mapping Users" ) can be added or removed from the central IdM SELinux users list. In the IdM server configuration, each SELinux user is configured with not only its user name but also its MLS and MCS range, SELinux_user:MLS[:MCS] . The IPA server uses this format to identify the SELinux user when configuring maps. The IdM user and host configuration is very flexible. Users and hosts can be explicitly and individually assigned to an SELinux user map, or user groups or host groups can be explicitly assigned to the map. You can also associate SELinux mapping rules with host-based access control rules to make administration easier, to avoid duplicating the same rule in two places, and to keep the rules synchronized. As long as the host-based access control rule defines a user and a host, you can use it for an SELinux user map. Host-based access control rules (described in Chapter 31, Configuring Host-Based Access Control ) help integrate SELinux user maps with other access controls in IdM and can help limit or allow host-based user access for remote users, as well as define local security contexts. Note If a host-based access control rule is associated with an SELinux user map, the host-based access control rule cannot be deleted until it is removed from the SELinux user map configuration. SELinux user maps work with the System Security Services Daemon (SSSD) and the pam_selinux module. When a remote user attempts to log into a machine, SSSD checks its IdM identity provider to collect the user information, including any SELinux maps. The PAM module then processes the user and assigns it the appropriate SELinux user context. SSSD caching enables the mapping to work offline.
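Concretely, such a map can be defined with the ipa command-line tools. The following is a sketch only, using example rule, user, and host names; confirm the exact options with ipa selinuxusermap-add --help on your IdM server:
ipa selinuxusermap-add staff_web_hosts --selinuxuser=staff_u:s0-s0:c0.c1023
ipa selinuxusermap-add-user staff_web_hosts --users=jsmith
ipa selinuxusermap-add-host staff_web_hosts --hosts=web.example.com
Alternatively, the map can reference an existing host-based access control rule instead of listing users and hosts directly, as described above.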
[ "semanage user -l Labelling MLS/ MLS/ SELinux User Prefix MCS Level MCS Range SELinux Roles guest_u user s0 s0 guest_r root user s0 s0-s0:c0.c1023 staff_r sysadm_r system_r unconfined_r staff_u user s0 s0-s0:c0.c1023 staff_r sysadm_r system_r unconfined_r sysadm_u user s0 s0-s0:c0.c1023 sysadm_r system_u user s0 s0-s0:c0.c1023 system_r unconfined_r unconfined_u user s0 s0-s0:c0.c1023 system_r unconfined_r user_u user s0 s0 user_r xguest_u user s0 s0 xguest_r" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/selinux-mapping
Chapter 24. Managing routing endpoints
Chapter 24. Managing routing endpoints The JMX Navigator view lets you add or delete routing endpoints. Important These changes are not persistent across routing context restarts. 24.1. Adding a routing endpoint Overview When testing a new scenario, you might want to add a new endpoint to a routing context. Procedure To add an endpoint to a routing context: In the JMX Navigator view, under the routing context node, select the Endpoints child to which you want to add an endpoint. Right-click the selected node to open the context menu, and then select Create Endpoint . In the Create Endpoint dialog, enter a URL that defines the new endpoint, for example, file://target/messages/validOrders . Click OK . Right-click the routing context node, and select Refresh . The new destination appears in the JMX Navigator view under the Endpoints node, in a folder that corresponds to the endpoint's type, for example, file . Related topics Section 24.2, "Deleting a routing endpoint" 24.2. Deleting a routing endpoint Overview When testing failover scenarios or other scenarios that involve handling failures, it is helpful to be able to remove an endpoint from a routing context. Procedure To delete a routing endpoint: In the JMX Navigator view, select the endpoint you want to delete. Right-click the selected endpoint to open the context menu, and then select Delete Endpoint . The tooling deletes the endpoint. To remove the deleted endpoint from the view, right-click the Endpoints node, and select Refresh . The endpoint disappears from the JMX Navigator view. Note To remove the endpoint's node from the Project Explorer view without rerunning the project, you need to explicitly delete it by right-clicking the node and selecting Delete . To remove it from view, refresh the project display. Related topics Section 24.1, "Adding a routing endpoint"
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/ridermanageendpoints
4.9.3. Sorting LVM Reports
4.9.3. Sorting LVM Reports Normally the entire output of the lvs , vgs , or pvs command has to be generated and stored internally before it can be sorted and columns aligned correctly. You can specify the --unbuffered argument to display unsorted output as soon as it is generated. To specify an alternative ordered list of columns to sort on, use the -O argument of any of the reporting commands. It is not necessary to include these fields within the output itself. The following example shows the output of the pvs command that displays the physical volume name, size, and free space. The following example shows the same output, sorted by the free space field. The following example shows that you do not need to display the field on which you are sorting. To display a reverse sort, precede a field you specify after the -O argument with the - character.
[ "pvs -o pv_name,pv_size,pv_free PV PSize PFree /dev/sdb1 17.14G 17.14G /dev/sdc1 17.14G 17.09G /dev/sdd1 17.14G 17.14G", "pvs -o pv_name,pv_size,pv_free -O pv_free PV PSize PFree /dev/sdc1 17.14G 17.09G /dev/sdd1 17.14G 17.14G /dev/sdb1 17.14G 17.14G", "pvs -o pv_name,pv_size -O pv_free PV PSize /dev/sdc1 17.14G /dev/sdd1 17.14G /dev/sdb1 17.14G", "pvs -o pv_name,pv_size,pv_free -O -pv_free PV PSize PFree /dev/sdd1 17.14G 17.14G /dev/sdb1 17.14G 17.14G /dev/sdc1 17.14G 17.09G" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/report_sorting
Chapter 5. Integrating Red Hat Satellite and Ansible Automation Controller
Chapter 5. Integrating Red Hat Satellite and Ansible Automation Controller You can integrate Red Hat Satellite and Ansible Automation Controller to use Satellite Server as a dynamic inventory source for Ansible Automation Controller. Ansible Automation Controller is a component of the Red Hat Ansible Automation Platform. You can also use the provisioning callback function to run playbooks on hosts managed by Satellite, from either the host or Ansible Automation Controller. When provisioning new hosts from Satellite Server, you can use the provisioning callback function to trigger playbook runs from Ansible Automation Controller. The playbook configures the host after the provisioning process. 5.1. Adding Satellite Server to Ansible Automation Controller as a dynamic inventory item To add Satellite Server to Ansible Automation Controller as a dynamic inventory item, you must create a credential for a Satellite Server user on Ansible Automation Controller, add an Ansible Automation Controller user to the credential, and then configure an inventory source. Prerequisites If your Satellite deployment is large, for example, managing tens of thousands of hosts, using a non-admin user can negatively impact performance because of time penalties that accrue during authorization checks. For large deployments, consider using an admin user. For non-admin users, you must assign the Ansible Tower Inventory Reader role to your Satellite Server user. For more information about managing users, roles, and permission filters, see Creating and Managing Roles in Administering Red Hat Satellite . You must host your Satellite Server and Ansible Automation Controller on the same network or subnet. Procedure In the Ansible Automation Controller web UI, create a credential for your Satellite. For more information about creating credentials, see Add a New Credential and Red Hat Satellite Credentials in the Automation Controller User Guide . Table 5.1. Satellite credentials Credential Type : Red Hat Satellite 6 Satellite URL : https:// satellite.example.com Username : The username of the Satellite user with the integration role. Password : The password of the Satellite user. Add an Ansible Automation Controller user to the new credential. For more information about adding a user to a credential, see Getting Started with Credentials in the Automation Controller User Guide . Add a new inventory. For more information, see Add a new inventory in the Automation Controller User Guide . In the new inventory, add Satellite Server as the inventory source, specifying the following inventory source options. For more information, see Add Source in the Automation Controller User Guide . Table 5.2. Inventory source options Source Red Hat Satellite 6 Credential The credential you create for Satellite Server. Overwrite Select Overwrite Variables Select Update on Launch Select Cache Timeout 90 Ensure that you synchronize the source that you add. 5.2. Configuring provisioning callback for a host When you create hosts in Satellite, you can use Ansible Automation Controller to run playbooks to configure your newly created hosts. This is called provisioning callback in Ansible Automation Controller. The provisioning callback function triggers a playbook run from Ansible Automation Controller as part of the provisioning process. The playbook configures the host after the provisioning process. For more information about provisioning callbacks, see Provisioning Callbacks in the Automation Controller User Guide . 
In Satellite Server, the Kickstart Default and Kickstart Default Finish templates include three snippets: ansible_provisioning_callback ansible_tower_callback_script ansible_tower_callback_service You can add parameters to hosts or host groups to provide the credentials that these snippets can use to run Ansible playbooks on your newly created hosts. Prerequisites Before you can configure provisioning callbacks, you must add Satellite as a dynamic inventory in Ansible Automation Controller. For more information, see Integrating Satellite and Ansible Automation Controller . In the Ansible Automation Controller web UI, you must complete the following tasks: Create a machine credential for your new host. Ensure that you enter the same password in the credential that you plan to assign to the host that you create in Satellite. For more information, see Add a New Credential in the Automation Controller User Guide . Create a project. For more information, see Projects in the Ansible Automation Controller User Guide . Add a job template to your project. For more information, see Job Templates in the Automation Controller User Guide . In your job template, you must enable provisioning callbacks, generate the host configuration key, and note the template_ID of your job template. For more information about job templates, see Job Templates in the Automation Controller User Guide . Procedure In the Satellite web UI, navigate to Configure > Host Group . Create a host group or edit an existing host group. In the Host Group window, click the Parameters tab. Click Add Parameter . Enter the following information for each new parameter: Table 5.3. Host parameters Name Value Description ansible_tower_provisioning true Enables Provisioning Callback. ansible_tower_fqdn controller.example.com The fully qualified domain name (FQDN) of your Ansible Automation Controller. Do not add https because this is appended by Satellite. ansible_job_template_id template_ID The ID of your provisioning template that you can find in the URL of the template: /templates/job_template/ 5 . ansible_host_config_key config_KEY The host configuration key that your job template generates in Ansible Automation Controller. Click Submit . Create a host using the host group. On the new host, enter the following command to start the ansible-callback service: On the new host, enter the following command to output the status of the ansible-callback service: Provisioning callback is configured correctly if the command returns the following output: Manual provisioning callback You can use the provisioning callback URL and the host configuration key from a host to call Ansible Automation Controller: Ensure that you use https when you enter the provisioning callback URL. This triggers the playbook run specified in the template against the host.
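Referring back to the host group parameters above, you can also add them from the command line with hammer instead of the web UI. This is a sketch using the example values from the table; replace the host group name, controller FQDN, job template ID, and configuration key with your own:
hammer hostgroup set-parameter --hostgroup "My_Host_Group" --name ansible_tower_provisioning --value true
hammer hostgroup set-parameter --hostgroup "My_Host_Group" --name ansible_tower_fqdn --value controller.example.com
hammer hostgroup set-parameter --hostgroup "My_Host_Group" --name ansible_job_template_id --value 5
hammer hostgroup set-parameter --hostgroup "My_Host_Group" --name ansible_host_config_key --value config_KEY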
[ "systemctl start ansible-callback", "systemctl status ansible-callback", "SAT_host systemd[1]: Started Provisioning callback to Ansible Automation Controller", "curl -k -s --data curl --insecure --data host_config_key= my_config_key https:// controller.example.com /api/v2/job_templates/ 8 /callback/" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_configurations_using_ansible_integration/integrating_satellite_and_ansible_automation_controller_ansible
4.6. Booleans
4.6. Booleans Booleans allow parts of SELinux policy to be changed at runtime, without any knowledge of SELinux policy writing. This allows changes, such as allowing services access to NFS volumes, without reloading or recompiling SELinux policy. 4.6.1. Listing Booleans For a list of Booleans, an explanation of what each one is, and whether they are on or off, run the semanage boolean -l command as the Linux root user. The following example does not list all Booleans and the output is shortened for brevity: Note To have more detailed descriptions, install the selinux-policy-devel package. The SELinux boolean column lists Boolean names. The Description column lists whether the Booleans are on or off, and what they do. The getsebool -a command lists Booleans, whether they are on or off, but does not give a description of each one. The following example does not list all Booleans: Run the getsebool boolean-name command to only list the status of the boolean-name Boolean: Use a space-separated list to list multiple Booleans: 4.6.2. Configuring Booleans Run the setsebool utility in the setsebool boolean_name on/off form to enable or disable Booleans. The following example demonstrates configuring the httpd_can_network_connect_db Boolean: Procedure 4.5. Configuring Booleans By default, the httpd_can_network_connect_db Boolean is off, preventing Apache HTTP Server scripts and modules from connecting to database servers: To temporarily enable Apache HTTP Server scripts and modules to connect to database servers, enter the following command as root: Use the getsebool utility to verify the Boolean has been enabled: This allows Apache HTTP Server scripts and modules to connect to database servers. This change is not persistent across reboots. To make changes persistent across reboots, run the setsebool -P boolean-name on command as root: [3] 4.6.3. Shell Auto-Completion It is possible to use shell auto-completion with the getsebool , setsebool , and semanage utilities. Use the auto-completion with getsebool and setsebool to complete both command-line parameters and Booleans. To list only the command-line parameters, add the hyphen character ("-") after the command name and hit the Tab key: To complete a Boolean, start writing the Boolean name and then hit Tab : The semanage utility is used with several command-line arguments that are completed one by one. The first argument of a semanage command is an option, which specifies what part of SELinux policy is managed: Then, one or more command-line parameters follow: Finally, complete the name of a particular SELinux entry, such as a Boolean, SELinux user, domain, or another. Start typing the entry and hit Tab : Command-line parameters can be chained in a command: [3] To temporarily revert to the default behavior, as the Linux root user, run the setsebool httpd_can_network_connect_db off command. For changes that persist across reboots, run the setsebool -P httpd_can_network_connect_db off command.
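Because the full Boolean listing is long, a common pattern is to filter it for the service you are interested in. For example, to show only the Booleans that mention httpd together with their current state:
semanage boolean -l | grep httpd
getsebool -a | grep httpd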
[ "~]# semanage boolean -l SELinux boolean State Default Description smartmon_3ware (off , off) Determine whether smartmon can mpd_enable_homedirs (off , off) Determine whether mpd can traverse", "~]USD getsebool -a cvs_read_shadow --> off daemons_dump_core --> on", "~]USD getsebool cvs_read_shadow cvs_read_shadow --> off", "~]USD getsebool cvs_read_shadow daemons_dump_core cvs_read_shadow --> off daemons_dump_core --> on", "~]USD getsebool httpd_can_network_connect_db httpd_can_network_connect_db --> off", "~]# setsebool httpd_can_network_connect_db on", "~]USD getsebool httpd_can_network_connect_db httpd_can_network_connect_db --> on", "~]# setsebool -P httpd_can_network_connect_db on", "~]# setsebool -[Tab] -P", "~]USD getsebool samba_[Tab] samba_create_home_dirs samba_export_all_ro samba_run_unconfined samba_domain_controller samba_export_all_rw samba_share_fusefs samba_enable_home_dirs samba_portmapper samba_share_nfs", "~]# setsebool -P virt_use_[Tab] virt_use_comm virt_use_nfs virt_use_sanlock virt_use_execmem virt_use_rawip virt_use_usb virt_use_fusefs virt_use_samba virt_use_xserver", "~]# semanage [Tab] boolean export import login node port dontaudit fcontext interface module permissive user", "~]# semanage fcontext -[Tab] -a -D --equal --help -m -o --add --delete -f -l --modify -S -C --deleteall --ftype --list -n -t -d -e -h --locallist --noheading --type", "~]# semanage fcontext -a -t samba<tab> samba_etc_t samba_secrets_t sambagui_exec_t samba_share_t samba_initrc_exec_t samba_unconfined_script_exec_t samba_log_t samba_unit_file_t samba_net_exec_t", "~]# semanage port -a -t http_port_t -p tcp 81" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Working_with_SELinux-Booleans
Chapter 59. SummaryService
Chapter 59. SummaryService 59.1. GetSummaryCounts GET /v1/summary/counts Deprecated starting 4.5.0 release, scheduled for removal starting 4.7.0. 59.1.1. Description 59.1.2. Parameters 59.1.3. Return Type V1SummaryCountsResponse 59.1.4. Content Type application/json 59.1.5. Responses Table 59.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1SummaryCountsResponse 0 An unexpected error response. RuntimeError 59.1.6. Samples 59.1.7. Common object reference 59.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 59.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 59.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 59.1.7.3. 
V1SummaryCountsResponse Field Name Required Nullable Type Description Format numAlerts String int64 numClusters String int64 numDeployments String int64 numImages String int64 numSecrets String int64 numNodes String int64
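A hedged sketch of calling this endpoint from the command line follows; the Central address and API token are placeholders, the use of a bearer token header is an assumption about the deployment, and the response values are illustrative only:

$ curl -sk -H "Authorization: Bearer <api_token>" "https://<central-address>/v1/summary/counts"
{"numAlerts": "3", "numClusters": "1", "numDeployments": "42", "numImages": "57", "numSecrets": "12", "numNodes": "6"}

Because each count uses the int64 string format listed above, clients should expect the numeric values to arrive as strings containing integers.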
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/summaryservice
Chapter 1. Red Hat Cluster Suite Overview
Chapter 1. Red Hat Cluster Suite Overview Clustered systems provide reliability, scalability, and availability to critical production services. Using Red Hat Cluster Suite, you can create a cluster to suit your needs for performance, high availability, load balancing, scalability, file sharing, and economy. This chapter provides an overview of Red Hat Cluster Suite components and functions, and consists of the following sections: Section 1.1, "Cluster Basics" Section 1.2, "Red Hat Cluster Suite Introduction" Section 1.3, "Cluster Infrastructure" Section 1.4, "High-availability Service Management" Section 1.5, "Red Hat GFS" Section 1.6, "Cluster Logical Volume Manager" Section 1.7, "Global Network Block Device" Section 1.8, "Linux Virtual Server" Section 1.9, "Cluster Administration Tools" Section 1.10, "Linux Virtual Server Administration GUI" 1.1. Cluster Basics A cluster is two or more computers (called nodes or members ) that work together to perform a task. There are four major types of clusters: Storage High availability Load balancing High performance Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to simultaneously read and write to a single shared file system. A storage cluster simplifies storage administration by limiting the installation and patching of applications to one file system. Also, with a cluster-wide file system, a storage cluster eliminates the need for redundant copies of application data and simplifies backup and disaster recovery. Red Hat Cluster Suite provides storage clustering through Red Hat GFS. High-availability clusters provide continuous availability of services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Typically, services in a high-availability cluster read and write data (via read-write mounted file systems). Therefore, a high-availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. Node failures in a high-availability cluster are not visible from clients outside the cluster. (High-availability clusters are sometimes referred to as failover clusters.) Red Hat Cluster Suite provides high-availability clustering through its High-availability Service Management component. Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load among the cluster nodes. Load balancing provides cost-effective scalability because you can match the number of nodes according to load requirements. If a node in a load-balancing cluster becomes inoperative, the load-balancing software detects the failure and redirects requests to other cluster nodes. Node failures in a load-balancing cluster are not visible from clients outside the cluster. Red Hat Cluster Suite provides load-balancing through LVS (Linux Virtual Server). High-performance clusters use cluster nodes to perform concurrent calculations. A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the applications. (High performance clusters are also referred to as computational clusters or grid computing.) Note The cluster types summarized in the preceding text reflect basic configurations; your needs might require a combination of the clusters described.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/ch.gfscs.cluster-overview-cso
20.9. Removing and Deleting a Virtual Machine
20.9. Removing and Deleting a Virtual Machine 20.9.1. Undefining a Virtual Machine The virsh undefine domain [--managed-save] [ storage ] [--remove-all-storage] [--wipe-storage] [--snapshots-metadata] [--nvram] command undefines a domain. If domain is inactive, the configuration is removed completely. If the domain is active (running), it is converted to a transient domain. When the guest virtual machine becomes inactive, the configuration is removed completely. This command can take the following arguments: --managed-save - this argument guarantees that any managed save image is also cleaned up. Without using this argument, attempts to undefine a guest virtual machine with a managed save will fail. --snapshots-metadata - this argument guarantees that any snapshots (as shown with snapshot-list ) are also cleaned up when undefining an inactive guest virtual machine. Note that any attempts to undefine an inactive guest virtual machine with snapshot metadata will fail. If this argument is used and the guest virtual machine is active, it is ignored. --storage - using this argument requires a comma-separated list of volume target names or source paths of storage volumes to be removed along with the undefined domain. This action undefines the storage volume before it is removed. Note that this can only be done with inactive guest virtual machines and that this only works with storage volumes that are managed by libvirt . --remove-all-storage - in addition to undefining the guest virtual machine, all associated storage volumes are deleted. If you want to delete the virtual machine, choose this option only if there are no other virtual machines using the same associated storage. An alternative is to use the virsh vol-delete command. See Section 20.31, "Deleting Storage Volumes" for more information. --wipe-storage - in addition to deleting the storage volume, the contents are wiped. Example 20.17. How to delete a guest virtual machine and delete its storage volumes The following example undefines the guest1 virtual machine and removes all associated storage volumes. An undefined guest becomes transient and is therefore deleted after it shuts down: # virsh undefine guest1 --remove-all-storage 20.9.2. Forcing a Guest Virtual Machine to Stop Note This command should only be used when you cannot shut down the guest virtual machine by any other method. The virsh destroy command initiates an immediate ungraceful shutdown and stops the specified guest virtual machine. Using virsh destroy can corrupt guest virtual machine file systems. Use the virsh destroy command only when the guest virtual machine is unresponsive. The virsh destroy command with the --graceful option attempts to flush the cache for the disk image file before powering off the virtual machine. Example 20.18. How to immediately shut down a guest virtual machine with a hard shutdown The following example immediately shuts down the guest1 virtual machine, probably because it is unresponsive: # virsh destroy guest1 You may want to follow this with the virsh undefine command. See Example 20.17, "How to delete a guest virtual machine and delete its storage volumes"
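As a brief, hedged illustration of the --storage argument described above (the volume target and image path are hypothetical and must refer to volumes managed by libvirt):

# virsh undefine guest1 --storage vda,/var/lib/libvirt/images/guest1-data.img

The guest must be inactive for the listed volumes to be removed along with its configuration.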
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virsh-delete
14.6.3. WINS (Windows Internetworking Name Server)
14.6.3. WINS (Windows Internetworking Name Server) Either a Samba server or a Windows NT server can function as a WINS server. When a WINS server is used with NetBIOS enabled, UDP unicasts can be routed, which allows name resolution across networks. Without a WINS server, the UDP broadcast is limited to the local subnet and therefore cannot be routed to other subnets, workgroups, or domains. If WINS replication is necessary, do not use Samba as your primary WINS server, as Samba does not currently support WINS replication. In a mixed NT/2000/2003 server and Samba environment, it is recommended that you use the Microsoft WINS capabilities. In a Samba-only environment, it is recommended that you use only one Samba server for WINS. The following is an example of the smb.conf file in which the Samba server is serving as a WINS server: Note All servers (including Samba) should connect to a WINS server to resolve NetBIOS names. Without WINS, browsing only occurs on the local subnet. Furthermore, even if a domain-wide list is somehow obtained, hosts are not resolvable for the client without WINS.
[ "[global] wins support = Yes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-wins
5.3.9. Activating and Deactivating Volume Groups
5.3.9. Activating and Deactivating Volume Groups When you create a volume group, it is activated by default. This means that the logical volumes in that group are accessible and subject to change. There are various circumstances for which you need to make a volume group inactive and thus unknown to the kernel. To deactivate or activate a volume group, use the -a ( --available ) argument of the vgchange command. The following example deactivates the volume group my_volume_group . If clustered locking is enabled, add 'e' to activate or deactivate a volume group exclusively on one node or 'l' to activate or deactivate a volume group only on the local node. Logical volumes with single-host snapshots are always activated exclusively because they can only be used on one node at once. You can deactivate individual logical volumes with the lvchange command, as described in Section 5.4.10, "Changing the Parameters of a Logical Volume Group" . For information on activating logical volumes on individual nodes in a cluster, see Section 5.7, "Activating Logical Volumes on Individual Nodes in a Cluster" .
[ "vgchange -a n my_volume_group" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_activate
Chapter 13. Low latency tuning
Chapter 13. Low latency tuning 13.1. Understanding low latency The emergence of Edge computing in the area of Telco / 5G plays a key role in reducing latency and congestion problems and improving application performance. Simply put, latency determines how fast data (packets) moves from the sender to receiver and returns to the sender after processing by the receiver. Maintaining a network architecture with the lowest possible latency is key to meeting the network performance requirements of 5G. Compared to 4G technology, with an average latency of 50 ms, 5G is targeted to reach latency numbers of 1 ms or less. This reduction in latency boosts wireless throughput by a factor of 10. Many of the deployed applications in the Telco space require low latency and cannot tolerate any packet loss. Tuning for zero packet loss helps mitigate the inherent issues that degrade network performance. For more information, see Tuning for Zero Packet Loss in Red Hat OpenStack Platform (RHOSP) . The Edge computing initiative also comes into play in reducing latency. Think of it as being on the edge of the cloud and closer to the user. This greatly reduces the distance between the user and distant data centers, resulting in reduced application response times and latency. Administrators must be able to manage their many Edge sites and local services in a centralized way so that all of the deployments can run at the lowest possible management cost. They also need an easy way to deploy and configure certain nodes of their cluster for real-time low latency and high-performance purposes. Low latency nodes are useful for applications such as Cloud-native Network Functions (CNF) and Data Plane Development Kit (DPDK). OpenShift Container Platform currently provides mechanisms to tune software on an OpenShift Container Platform cluster for real-time running and low latency (around <20 microseconds reaction time). This includes tuning the kernel and OpenShift Container Platform set values, installing a kernel, and reconfiguring the machine. However, this method requires setting up four different Operators and performing many configurations that, when done manually, are complex and prone to mistakes. OpenShift Container Platform uses the Node Tuning Operator to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator uses this performance profile configuration, which makes it easier to apply these changes in a more reliable way. The administrator can specify whether to update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolate CPUs for application containers to run the workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. OpenShift Container Platform also supports workload hints for the Node Tuning Operator that can tune the PerformanceProfile to meet the demands of different industry environments. Workload hints are available for highPowerConsumption (very low latency at the cost of increased power consumption) and realTime (priority given to optimum latency). A combination of true/false settings for these hints can be used to deal with application-specific workload profiles and requirements.
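As a minimal, hedged sketch of how these hints appear in a profile (the profile name is hypothetical, and a complete profile also requires cpu and nodeSelector settings, as shown later in this chapter):

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-hints
spec:
  workloadHints:
    highPowerConsumption: false
    realTime: true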
Workload hints simplify the fine-tuning of performance to industry sector settings. Instead of a "one size fits all" approach, workload hints can cater to usage patterns such as placing priority on: Low latency Real-time capability Efficient use of power In an ideal world, all of those would be prioritized: in real life, some come at the expense of others. The Node Tuning Operator is now aware of the workload expectations and better able to meet the demands of the workload. The cluster admin can now specify into which use case that workload falls. The Node Tuning Operator uses the PerformanceProfile to fine tune the performance settings for the workload. The environment in which an application is operating influences its behavior. For a typical data center with no strict latency requirements, only minimal default tuning is needed that enables CPU partitioning for some high performance workload pods. For data centers and workloads where latency is a higher priority, measures are still taken to optimize power consumption. The most complicated cases are clusters close to latency-sensitive equipment such as manufacturing machinery and software-defined radios. This last class of deployment is often referred to as Far edge. For Far edge deployments, ultra-low latency is the ultimate priority, and is achieved at the expense of power management. In OpenShift Container Platform version 4.10 and versions, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance. Now this functionality is part of the Node Tuning Operator. 13.1.1. About hyperthreading for low latency and real-time applications Hyperthreading is an Intel processor technology that allows a physical CPU processor core to function as two logical cores, executing two independent threads simultaneously. Hyperthreading allows for better system throughput for certain workload types where parallel processing is beneficial. The default OpenShift Container Platform configuration expects hyperthreading to be enabled by default. For telecommunications applications, it is important to design your application infrastructure to minimize latency as much as possible. Hyperthreading can slow performance times and negatively affect throughput for compute intensive workloads that require low latency. Disabling hyperthreading ensures predictable performance and can decrease processing times for these workloads. Note Hyperthreading implementation and configuration differs depending on the hardware you are running OpenShift Container Platform on. Consult the relevant host hardware tuning information for more details of the hyperthreading implementation specific to that hardware. Disabling hyperthreading can increase the cost per core of the cluster. Additional resources Configuring hyperthreading for a cluster 13.2. Provisioning real-time and low latency workloads Many industries and organizations need extremely high performance computing and might require low and predictable latency, especially in the financial and telecommunications industries. For these industries, with their unique requirements, OpenShift Container Platform provides the Node Tuning Operator to implement automatic tuning to achieve low latency performance and consistent response time for OpenShift Container Platform applications. The cluster administrator can use this performance profile configuration to make these changes in a more reliable way. 
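Before creating or editing a profile, a quick check of what already exists on the cluster can be useful (a hedged sketch; the output shown is illustrative only):

$ oc get performanceprofile
NAME          AGE
performance   2d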
The administrator can specify whether to update the kernel to kernel-rt (real-time), reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, isolate CPUs for application containers to run the workloads, and disable unused CPUs to reduce power consumption. Warning The usage of execution probes in conjunction with applications that require guaranteed CPUs can cause latency spikes. It is recommended to use other probes, such as a properly configured set of network probes, as an alternative. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, these functions are part of the Node Tuning Operator. 13.2.1. Known limitations for real-time Note In most deployments, kernel-rt is supported only on worker nodes when you use a standard cluster with three control plane nodes and three worker nodes. There are exceptions for compact and single nodes on OpenShift Container Platform deployments. For installations on a single node, kernel-rt is supported on the single control plane node. To fully utilize the real-time mode, the containers must run with elevated privileges. See Set capabilities for a Container for information on granting privileges. OpenShift Container Platform restricts the allowed capabilities, so you might need to create a SecurityContext as well. Note This procedure is fully supported with bare metal installations using Red Hat Enterprise Linux CoreOS (RHCOS) systems. Establishing the right performance expectations refers to the fact that the real-time kernel is not a panacea. Its objective is consistent, low-latency determinism offering predictable response times. There is some additional kernel overhead associated with the real-time kernel. This is due primarily to handling hardware interruptions in separately scheduled threads. The increased overhead in some workloads results in some degradation in overall throughput. The exact amount of degradation is very workload dependent, ranging from 0% to 30%. However, it is the cost of determinism. 13.2.2. Provisioning a worker with real-time capabilities Optional: Add a node to the OpenShift Container Platform cluster. See Setting BIOS parameters for system tuning . Add the label worker-rt to the worker nodes that require the real-time capability by using the oc command. Create a new machine config pool for real-time nodes: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-rt labels: machineconfiguration.openshift.io/role: worker-rt spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-rt], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-rt: "" Note that a machine config pool worker-rt is created for the group of nodes that have the label worker-rt . Add the node to the proper machine config pool by using node role labels. Note You must decide which nodes are configured with real-time workloads. You could configure all of the nodes in the cluster, or a subset of the nodes. The Node Tuning Operator expects all of the nodes to be part of a dedicated machine config pool. If you use all of the nodes, you must point the Node Tuning Operator to the worker node role label. If you use a subset, you must group the nodes into a new machine config pool.
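The labeling step mentioned in the procedure above is not shown as a command; a hedged sketch of one way to do it (the node name is a placeholder) is:

$ oc label node <node-name> node-role.kubernetes.io/worker-rt=""

The nodeSelector in the machine config pool shown above matches this label, so labeled nodes join the worker-rt pool.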
Create the PerformanceProfile with the proper set of housekeeping cores and realTimeKernel: enabled: true . You must set machineConfigPoolSelector in PerformanceProfile : apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: ... realTimeKernel: enabled: true nodeSelector: node-role.kubernetes.io/worker-rt: "" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-rt Verify that a matching machine config pool exists with a label: USD oc describe mcp/worker-rt Example output Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt OpenShift Container Platform will start configuring the nodes, which might involve multiple reboots. Wait for the nodes to settle. This can take a long time depending on the specific hardware you use, but 20 minutes per node is expected. Verify everything is working as expected. 13.2.3. Verifying the real-time kernel installation Use this command to verify that the real-time kernel is installed: USD oc get node -o wide Note the worker with the role worker-rt that contains the string 4.18.0-305.30.1.rt7.102.el8_4.x86_64 cri-o://1.25.0-99.rhaos4.10.gitc3131de.el8 : NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME rt-worker-0.example.com Ready worker,worker-rt 5d17h v1.25.0 128.66.135.107 <none> Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa) 4.18.0-305.30.1.rt7.102.el8_4.x86_64 cri-o://1.25.0-99.rhaos4.10.gitc3131de.el8 [...] 13.2.4. Creating a workload that works in real-time Use the following procedures for preparing a workload that will use real-time capabilities. Procedure Create a pod with a QoS class of Guaranteed . Optional: Disable CPU load balancing for DPDK. Assign a proper node selector. When writing your applications, follow the general recommendations described in Application tuning and deployment . 13.2.5. Creating a pod with a QoS class of Guaranteed Keep the following in mind when you create a pod that is given a QoS class of Guaranteed : Every container in the pod must have a memory limit and a memory request, and they must be the same. Every container in the pod must have a CPU limit and a CPU request, and they must be the same. The following example shows the configuration file for a pod that has one container. The container has a memory limit and a memory request, both equal to 200 MiB. The container has a CPU limit and a CPU request, both equal to 1 CPU. apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: "200Mi" cpu: "1" requests: memory: "200Mi" cpu: "1" Create the pod: USD oc apply -f qos-pod.yaml --namespace=qos-example View detailed information about the pod: USD oc get pod qos-demo --namespace=qos-example --output=yaml Example output spec: containers: ... status: qosClass: Guaranteed Note If a container specifies its own memory limit, but does not specify a memory request, OpenShift Container Platform automatically assigns a memory request that matches the limit. Similarly, if a container specifies its own CPU limit, but does not specify a CPU request, OpenShift Container Platform automatically assigns a CPU request that matches the limit. 13.2.6. Optional: Disabling CPU load balancing for DPDK Functionality to disable or enable CPU load balancing is implemented on the CRI-O level. 
The code under the CRI-O disables or enables CPU load balancing only when the following requirements are met. The pod must use the performance-<profile-name> runtime class. You can get the proper name by looking at the status of the performance profile, as shown here: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile ... status: ... runtimeClass: performance-manual Note Currently, disabling CPU load balancing is not supported with cgroup v2. The Node Tuning Operator is responsible for the creation of the high-performance runtime handler config snippet under relevant nodes and for creation of the high-performance runtime class under the cluster. It will have the same content as default runtime handler except it enables the CPU load balancing configuration functionality. To disable the CPU load balancing for the pod, the Pod specification must include the following fields: apiVersion: v1 kind: Pod metadata: ... annotations: ... cpu-load-balancing.crio.io: "disable" ... ... spec: ... runtimeClassName: performance-<profile_name> ... Note Only disable CPU load balancing when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the cluster. 13.2.7. Assigning a proper node selector The preferred way to assign a pod to nodes is to use the same node selector the performance profile used, as shown here: apiVersion: v1 kind: Pod metadata: name: example spec: # ... nodeSelector: node-role.kubernetes.io/worker-rt: "" For more information, see Placing pods on specific nodes using node selectors . 13.2.8. Scheduling a workload onto a worker with real-time capabilities Use label selectors that match the nodes attached to the machine config pool that was configured for low latency by the Node Tuning Operator. For more information, see Assigning pods to nodes . 13.2.9. Reducing power consumption by taking CPUs offline You can generally anticipate telecommunication workloads. When not all of the CPU resources are required, the Node Tuning Operator allows you take unused CPUs offline to reduce power consumption by manually updating the performance profile. To take unused CPUs offline, you must perform the following tasks: Set the offline CPUs in the performance profile and save the contents of the YAML file: Example performance profile with offlined CPUs apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: "2-23,26-47" reserved: "0,1,24,25" offlined: "48-59" 1 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true 1 Optional. You can list CPUs in the offlined field to take the specified CPUs offline. Apply the updated profile by running the following command: USD oc apply -f my-performance-profile.yaml 13.2.10. Optional: Power saving configurations You can enable power savings for a node that has low priority workloads that are colocated with high priority workloads without impacting the latency or throughput of the high priority workloads. Power saving is possible without modifications to the workloads themselves. Important The feature is supported on Intel Ice Lake and later generations of Intel CPUs. The capabilities of the processor might impact the latency and throughput of the high priority workloads. 
When you configure a node with a power saving configuration, you must configure high priority workloads with performance configuration at the pod level, which means that the configuration applies to all the cores used by the pod. By disabling P-states and C-states at the pod level, you can configure high priority workloads for best performance and lowest latency. Table 13.1. Configuration for high priority workloads Annotation Description annotations: cpu-c-states.crio.io: "disable" cpu-freq-governor.crio.io: "<governor>" Provides the best performance for a pod by disabling C-states and specifying the governor type for CPU scaling. The performance governor is recommended for high priority workloads. Prerequisites You enabled C-states and OS-controlled P-states in the BIOS Procedure Generate a PerformanceProfile with per-pod-power-management set to true : USD podman run --entrypoint performance-profile-creator -v \ /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.12 \ --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true \ --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node \ --must-gather-dir-path /must-gather -power-consumption-mode=low-latency \ 1 --per-pod-power-management=true > my-performance-profile.yaml 1 The power-consumption-mode must be default or low-latency when the per-pod-power-management is set to true . Example PerformanceProfile with perPodPowerManagement apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true Set the default cpufreq governor as an additional kernel argument in the PerformanceProfile custom resource (CR): apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: ... additionalKernelArgs: - cpufreq.default_governor=schedutil 1 1 Using the schedutil governor is recommended, however, you can use other governors such as the ondemand or powersave governors. Set the maximum CPU frequency in the TunedPerformancePatch CR: spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1 1 The max_perf_pct controls the maximum frequency the cpufreq driver is allowed to set as a percentage of the maximum supported cpu frequency. This value applies to all CPUs. You can check the maximum supported frequency in /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq . As a starting point, you can use a percentage that caps all CPUs at the All Cores Turbo frequency. The All Cores Turbo frequency is the frequency that all cores will run at when the cores are all fully occupied. Add the desired annotations to your high priority workload pods. The annotations override the default settings. Example high priority workload annotation apiVersion: v1 kind: Pod metadata: ... annotations: ... cpu-c-states.crio.io: "disable" cpu-freq-governor.crio.io: "<governor>" ... ... spec: ... runtimeClassName: performance-<profile_name> ... Restart the pods. Additional resources Recommended firmware configuration for vDU cluster hosts . Placing pods on specific nodes using node selectors . 13.2.11. Managing device interrupt processing for guaranteed pod isolated CPUs The Node Tuning Operator can manage host CPUs by dividing them into reserved CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolated CPUs for application containers to run the workloads. 
This allows you to set CPUs for low latency workloads as isolated. Device interrupts are load balanced between all isolated and reserved CPUs to avoid CPUs being overloaded, with the exception of CPUs where there is a guaranteed pod running. Guaranteed pod CPUs are prevented from processing device interrupts when the relevant annotations are set for the pod. In the performance profile, globallyDisableIrqLoadBalancing is used to manage whether device interrupts are processed or not. For certain workloads, the reserved CPUs are not always sufficient for dealing with device interrupts, and for this reason, device interrupts are not globally disabled on the isolated CPUs. By default, Node Tuning Operator does not disable device interrupts on isolated CPUs. To achieve low latency for workloads, some (but not all) pods require the CPUs they are running on to not process device interrupts. A pod annotation, irq-load-balancing.crio.io , is used to define whether device interrupts are processed or not. When configured, CRI-O disables device interrupts only as long as the pod is running. 13.2.11.1. Disabling CPU CFS quota To reduce CPU throttling for individual guaranteed pods, create a pod specification with the annotation cpu-quota.crio.io: "disable" . This annotation disables the CPU completely fair scheduler (CFS) quota at the pod run time. The following pod specification contains this annotation: apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: "disable" spec: runtimeClassName: performance-<profile_name> ... Note Only disable CPU CFS quota when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU CFS quota can affect the performance of other containers in the cluster. 13.2.11.2. Disabling global device interrupts handling in Node Tuning Operator To configure Node Tuning Operator to disable global device interrupts for the isolated CPU set, set the globallyDisableIrqLoadBalancing field in the performance profile to true . When true , conflicting pod annotations are ignored. When false , IRQ loads are balanced across all CPUs. A performance profile snippet illustrates this setting: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: globallyDisableIrqLoadBalancing: true ... 13.2.11.3. Disabling interrupt processing for individual pods To disable interrupt processing for individual pods, ensure that globallyDisableIrqLoadBalancing is set to false in the performance profile. Then, in the pod specification, set the irq-load-balancing.crio.io pod annotation to disable . The following pod specification contains this annotation: apiVersion: v1 kind: Pod metadata: annotations: irq-load-balancing.crio.io: "disable" spec: runtimeClassName: performance-<profile_name> ... 13.2.12. Upgrading the performance profile to use device interrupt processing When you upgrade the Node Tuning Operator performance profile custom resource definition (CRD) from v1 or v1alpha1 to v2, globallyDisableIrqLoadBalancing is set to true on existing profiles. Note globallyDisableIrqLoadBalancing toggles whether IRQ load balancing will be disabled for the Isolated CPU set. When the option is set to true it disables IRQ load balancing for the Isolated CPU set. Setting the option to false allows the IRQs to be balanced across all CPUs. 13.2.12.1. Supported API Versions The Node Tuning Operator supports v2 , v1 , and v1alpha1 for the performance profile apiVersion field.
The v1 and v1alpha1 APIs are identical. The v2 API includes an optional boolean field globallyDisableIrqLoadBalancing with a default value of false . 13.2.12.1.1. Upgrading Node Tuning Operator API from v1alpha1 to v1 When upgrading Node Tuning Operator API version from v1alpha1 to v1, the v1alpha1 performance profiles are converted on-the-fly using a "None" Conversion strategy and served to the Node Tuning Operator with API version v1. 13.2.12.1.2. Upgrading Node Tuning Operator API from v1alpha1 or v1 to v2 When upgrading from an older Node Tuning Operator API version, the existing v1 and v1alpha1 performance profiles are converted using a conversion webhook that injects the globallyDisableIrqLoadBalancing field with a value of true . 13.3. Tuning nodes for low latency with the performance profile The performance profile lets you control latency tuning aspects of nodes that belong to a certain machine config pool. After you specify your settings, the PerformanceProfile object is compiled into multiple objects that perform the actual node level tuning: A MachineConfig file that manipulates the nodes. A KubeletConfig file that configures the Topology Manager, the CPU Manager, and the OpenShift Container Platform nodes. The Tuned profile that configures the Node Tuning Operator. You can use a performance profile to specify whether to update the kernel to kernel-rt, to allocate huge pages, and to partition the CPUs for performing housekeeping duties or running workloads. Note You can manually create the PerformanceProfile object or use the Performance Profile Creator (PPC) to generate a performance profile. See the additional resources below for more information on the PPC. Sample performance profile apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: "4-15" 1 reserved: "0-3" 2 hugepages: defaultHugepagesSize: "1G" pages: - size: "1G" count: 16 node: 0 realTimeKernel: enabled: true 3 numa: 4 topologyPolicy: "best-effort" nodeSelector: node-role.kubernetes.io/worker-cnf: "" 5 1 Use this field to isolate specific CPUs to use with application containers for workloads. Set an even number of isolated CPUs to enable the pods to run without errors when hyperthreading is enabled. 2 Use this field to reserve specific CPUs to use with infra containers for housekeeping. 3 Use this field to install the real-time kernel on the node. Valid values are true or false . Setting the true value installs the real-time kernel. 4 Use this field to configure the topology manager policy. Valid values are none (default), best-effort , restricted , and single-numa-node . For more information, see Topology Manager Policies . 5 Use this field to specify a node selector to apply the performance profile to specific nodes. Additional resources For information on using the Performance Profile Creator (PPC) to generate a performance profile, see Creating a performance profile . 13.3.1. Configuring huge pages Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. Use the Node Tuning Operator to allocate huge pages on a specific node. OpenShift Container Platform provides a method for creating and allocating huge pages. Node Tuning Operator provides an easier method for doing this using the performance profile. 
For example, in the hugepages pages section of the performance profile, you can specify multiple blocks of size , count , and, optionally, node : hugepages: defaultHugepagesSize: "1G" pages: - size: "1G" count: 4 node: 0 1 1 node is the NUMA node in which the huge pages are allocated. If you omit node , the pages are evenly spread across all NUMA nodes. Note Wait for the relevant machine config pool status that indicates the update is finished. These are the only configuration steps you need to do to allocate huge pages. Verification To verify the configuration, see the /proc/meminfo file on the node: USD oc debug node/ip-10-0-141-105.ec2.internal # grep -i huge /proc/meminfo Example output AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ## Use oc describe to report the new size: USD oc describe node worker-0.ocp4poc.example.com | grep -i huge Example output hugepages-1g=true hugepages-###: ### hugepages-###: ### 13.3.2. Allocating multiple huge page sizes You can request huge pages with different sizes under the same container. This allows you to define more complicated pods consisting of containers with different huge page size needs. For example, you can define sizes 1G and 2M and the Node Tuning Operator will configure both sizes on the node, as shown here: spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G 13.3.3. Configuring a node for IRQ dynamic load balancing Configure a cluster node for IRQ dynamic load balancing to control which cores can receive device interrupt requests (IRQ). Prerequisites For core isolation, all server hardware components must support IRQ affinity. To check if the hardware components of your server support IRQ affinity, view the server's hardware specifications or contact your hardware provider. Procedure Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Set the performance profile apiVersion to use performance.openshift.io/v2 . Remove the globallyDisableIrqLoadBalancing field or set it to false . Set the appropriate isolated and reserved CPUs. The following snippet illustrates a profile that reserves 2 CPUs. IRQ load-balancing is enabled for pods running on the isolated CPU set: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1 ... Note When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. Create the pod that uses exclusive CPUs, and set irq-load-balancing.crio.io and cpu-quota.crio.io annotations to disable . For example: apiVersion: v1 kind: Pod metadata: name: dynamic-irq-pod annotations: irq-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" spec: containers: - name: dynamic-irq-pod image: "registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12" command: ["sleep", "10h"] resources: requests: cpu: 2 memory: "200M" limits: cpu: 2 memory: "200M" nodeSelector: node-role.kubernetes.io/worker-cnf: "" runtimeClassName: performance-dynamic-irq-profile ... Enter the pod runtimeClassName in the form performance-<profile_name>, where <profile_name> is the name from the PerformanceProfile YAML, in this example, performance-dynamic-irq-profile . Set the node selector to target a cnf-worker. Ensure the pod is running correctly. 
Status should be running , and the correct cnf-worker node should be set: USD oc get pod -o wide Expected output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dynamic-irq-pod 1/1 Running 0 5h33m <ip-address> <node-name> <none> <none> Get the CPUs that the pod configured for IRQ dynamic load balancing runs on: USD oc exec -it dynamic-irq-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'" Expected output Cpus_allowed_list: 2-3 Ensure the node configuration is applied correctly. Log in to the node to verify the configuration. USD oc debug node/<node-name> Expected output Starting pod/<node-name>-debug ... To use host binaries, run `chroot /host` Pod IP: <ip-address> If you don't see a command prompt, try pressing enter. sh-4.4# Verify that you can use the node file system: sh-4.4# chroot /host Expected output sh-4.4# Ensure the default system CPU affinity mask does not include the dynamic-irq-pod CPUs, for example, CPUs 2 and 3. USD cat /proc/irq/default_smp_affinity Example output 33 Ensure the system IRQs are not configured to run on the dynamic-irq-pod CPUs: find /proc/irq/ -name smp_affinity_list -exec sh -c 'i="USD1"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \; Example output /proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5 13.3.4. About support of IRQ affinity setting Some IRQ controllers lack support for IRQ affinity setting and will always expose all online CPUs as the IRQ mask. These IRQ controllers effectively run on CPU 0. The following are examples of drivers and hardware that Red Hat are aware lack support for IRQ affinity setting. The list is, by no means, exhaustive: Some RAID controller drivers, such as megaraid_sas Many non-volatile memory express (NVMe) drivers Some LAN on motherboard (LOM) network controllers The driver uses managed_irqs Note The reason they do not support IRQ affinity setting might be associated with factors such as the type of processor, the IRQ controller, or the circuitry connections in the motherboard. If the effective affinity of any IRQ is set to an isolated CPU, it might be a sign of some hardware or driver not supporting IRQ affinity setting. 
To find the effective affinity, log in to the host and run the following command: USD find /proc/irq -name effective_affinity -printf "%p: " -exec cat {} \; Example output /proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2 Some drivers use managed_irqs , whose affinity is managed internally by the kernel and userspace cannot change the affinity. In some cases, these IRQs might be assigned to isolated CPUs. For more information about managed_irqs , see Affinity of managed interrupts cannot be changed even if they target isolated CPU . 13.3.5. Configuring hyperthreading for a cluster To configure hyperthreading for an OpenShift Container Platform cluster, set the CPU threads in the performance profile to the same cores that are configured for the reserved or isolated CPU pools. Note If you configure a performance profile, and subsequently change the hyperthreading configuration for the host, ensure that you update the CPU isolated and reserved fields in the PerformanceProfile YAML to match the new configuration. Warning Disabling a previously enabled host hyperthreading configuration can cause the CPU core IDs listed in the PerformanceProfile YAML to be incorrect. This incorrect configuration can cause the node to become unavailable because the listed CPUs can no longer be found. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI (oc). Procedure Ascertain which threads are running on what CPUs for the host you want to configure. You can view which threads are running on the host CPUs by logging in to the cluster and running the following command: USD lscpu --all --extended Example output CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000 In this example, there are eight logical CPU cores running on four physical CPU cores. CPU0 and CPU4 are running on physical Core0, CPU1 and CPU5 are running on physical Core 1, and so on. Alternatively, to view the threads that are set for a particular physical CPU core ( cpu0 in the example below), open a command prompt and run the following: USD cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list Example output 0-4 Apply the isolated and reserved CPUs in the PerformanceProfile YAML. For example, you can set logical cores CPU0 and CPU4 as isolated , and logical cores CPU1 to CPU3 and CPU5 to CPU7 as reserved . 
When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. ... cpu: isolated: 0,4 reserved: 1-3,5-7 ... Note The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node. Important Hyperthreading is enabled by default on most Intel processors. If you enable hyperthreading, all threads processed by a particular core must be isolated or processed on the same core. 13.3.5.1. Disabling hyperthreading for low latency applications When configuring clusters for low latency processing, consider whether you want to disable hyperthreading before you deploy the cluster. To disable hyperthreading, do the following: Create a performance profile that is appropriate for your hardware and topology. Set nosmt as an additional kernel argument. The following example performance profile illustrates this setting: \ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true Note When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. 13.3.6. Understanding workload hints The following table describes how combinations of power consumption and real-time settings impact on latency. Note The following workload hints can be configured manually. You can also work with workload hints using the Performance Profile Creator. For more information about the performance profile, see the "Creating a performance profile" section. If the workload hint is configured manually and the realTime workload hint is not explicitly set then it defaults to true . Performance Profile creator setting Hint Environment Description Default workloadHints: highPowerConsumption: false realTime: false High throughput cluster without latency requirements Performance achieved through CPU partitioning only. Low-latency workloadHints: highPowerConsumption: false realTime: true Regional datacenters Both energy savings and low-latency are desirable: compromise between power management, latency and throughput. Ultra-low-latency workloadHints: highPowerConsumption: true realTime: true Far edge clusters, latency critical workloads Optimized for absolute minimal latency and maximum determinism at the cost of increased power consumption. Per-pod power management workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true Critical and non-critical workloads Allows for power management per pod. Additional resources For information about using the Performance Profile Creator (PPC) to generate a performance profile, see Creating a performance profile . 13.3.7. Configuring workload hints manually Procedure Create a PerformanceProfile appropriate for the environment's hardware and topology as described in the table in "Understanding workload hints". Adjust the profile to match the expected workload. In this example, we tune for the lowest possible latency. Add the highPowerConsumption and realTime workload hints. Both are set to true here. 
apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: ... workloadHints: highPowerConsumption: true 1 realTime: true 2 1 If highPowerConsumption is true , the node is tuned for very low latency at the cost of increased power consumption. 2 Disables some debugging and monitoring features that can affect system latency. Note When the realTime workload hint flag is set to true in a performance profile, add the cpu-quota.crio.io: disable annotation to every guaranteed pod with pinned CPUs. This annotation is necessary to prevent the degradation of the process performance within the pod. If the realTime workload hint is not explicitly set then it defaults to true . Additional resources For information about reducing CPU throttling for individual guaranteed pods, see Disabling CPU CFS quota . 13.3.8. Restricting CPUs for infra and application containers Generic housekeeping and workload tasks use CPUs in a way that may impact latency-sensitive processes. By default, the container runtime uses all online CPUs to run all containers together, which can result in context switches and spikes in latency. Partitioning the CPUs prevents noisy processes from interfering with latency-sensitive processes by separating them from each other. The following table describes how processes run on a CPU after you have tuned the node using the Node Tuning Operator: Table 13.2. Process' CPU assignments Process type Details Burstable and BestEffort pods Runs on any CPU except where low latency workload is running Infrastructure pods Runs on any CPU except where low latency workload is running Interrupts Redirects to reserved CPUs (optional in OpenShift Container Platform 4.7 and later) Kernel processes Pins to reserved CPUs Latency-sensitive workload pods Pins to a specific set of exclusive CPUs from the isolated pool OS processes/systemd services Pins to reserved CPUs The allocatable capacity of cores on a node for pods of all QoS process types, Burstable , BestEffort , or Guaranteed , is equal to the capacity of the isolated pool. The capacity of the reserved pool is removed from the node's total core capacity for use by the cluster and operating system housekeeping duties. Example 1 A node features a capacity of 100 cores. Using a performance profile, the cluster administrator allocates 50 cores to the isolated pool and 50 cores to the reserved pool. The cluster administrator assigns 25 cores to QoS Guaranteed pods and 25 cores for BestEffort or Burstable pods. This matches the capacity of the isolated pool. Example 2 A node features a capacity of 100 cores. Using a performance profile, the cluster administrator allocates 50 cores to the isolated pool and 50 cores to the reserved pool. The cluster administrator assigns 50 cores to QoS Guaranteed pods and one core for BestEffort or Burstable pods. This exceeds the capacity of the isolated pool by one core. Pod scheduling fails because of insufficient CPU capacity. The exact partitioning pattern to use depends on many factors like hardware, workload characteristics and the expected system load. Some sample use cases are as follows: If the latency-sensitive workload uses specific hardware, such as a network interface controller (NIC), ensure that the CPUs in the isolated pool are as close as possible to this hardware. At a minimum, you should place the workload in the same Non-Uniform Memory Access (NUMA) node. The reserved pool is used for handling all interrupts. 
When depending on system networking, allocate a sufficiently-sized reserve pool to handle all the incoming packet interrupts. In 4.12 and later versions, workloads can optionally be labeled as sensitive. The decision regarding which specific CPUs should be used for reserved and isolated partitions requires detailed analysis and measurements. Factors like NUMA affinity of devices and memory play a role. The selection also depends on the workload architecture and the specific use case. Important The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node. To ensure that housekeeping tasks and workloads do not interfere with each other, specify two groups of CPUs in the spec section of the performance profile. isolated - Specifies the CPUs for the application container workloads. These CPUs have the lowest latency. Processes in this group have no interruptions and can, for example, reach much higher DPDK zero packet loss bandwidth. reserved - Specifies the CPUs for the cluster and operating system housekeeping duties. Threads in the reserved group are often busy. Do not run latency-sensitive applications in the reserved group. Latency-sensitive applications run in the isolated group. Procedure Create a performance profile appropriate for the environment's hardware and topology. Add the reserved and isolated parameters with the CPUs you want reserved and isolated for the infra and application containers: \ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: "0-4,9" 1 isolated: "5-8" 2 nodeSelector: 3 node-role.kubernetes.io/worker: "" 1 Specify which CPUs are for infra containers to perform cluster and operating system housekeeping duties. 2 Specify which CPUs are for application containers to run workloads. 3 Optional: Specify a node selector to apply the performance profile to specific nodes. Additional resources Managing device interrupt processing for guaranteed pod isolated CPUs Create a pod that gets assigned a QoS class of Guaranteed 13.4. Reducing NIC queues using the Node Tuning Operator The Node Tuning Operator allows you to adjust the network interface controller (NIC) queue count for each network device. By using a PerformanceProfile, the amount of queues can be reduced to the number of reserved CPUs. 13.4.1. Adjusting the NIC queues with the performance profile The performance profile lets you adjust the queue count for each network device. Supported network devices: Non-virtual network devices Network devices that support multiple queues (channels) Unsupported network devices: Pure software network interfaces Block devices Intel DPDK virtual functions Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform cluster running the Node Tuning Operator as a user with cluster-admin privileges. Create and apply a performance profile appropriate for your hardware and topology. For guidance on creating a profile, see the "Creating a performance profile" section. Edit this created performance profile: USD oc edit -f <your_profile_name>.yaml Populate the spec field with the net object. The object list can contain two fields: userLevelNetworking is a required field specified as a boolean flag. If userLevelNetworking is true , the queue count is set to the reserved CPU count for all supported devices. The default is false . 
devices is an optional field specifying a list of devices that will have the queues set to the reserved CPU count. If the device list is empty, the configuration applies to all network devices. The configuration is as follows: interfaceName : This field specifies the interface name, and it supports shell-style wildcards, which can be positive or negative. Example wildcard syntax is as follows: <string> .* Negative rules are prefixed with an exclamation mark. To apply the net queue changes to all devices other than the excluded list, use !<device> , for example, !eno1 . vendorID : The network device vendor ID represented as a 16-bit hexadecimal number with a 0x prefix. deviceID : The network device ID (model) represented as a 16-bit hexadecimal number with a 0x prefix. Note When a deviceID is specified, the vendorID must also be defined. A device that matches all of the device identifiers specified in a device entry interfaceName , vendorID , or a pair of vendorID plus deviceID qualifies as a network device. This network device then has its net queues count set to the reserved CPU count. When two or more devices are specified, the net queues count is set to any net device that matches one of them. Set the queue count to the reserved CPU count for all devices by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices matching any of the defined device identifiers by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth0" - interfaceName: "eth1" - vendorID: "0x1af4" deviceID: "0x1000" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices starting with the interface name eth by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth*" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices with an interface named anything other than eno1 by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "!eno1" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices that have an interface name eth0 , vendorID of 0x1af4 , and deviceID of 0x1000 by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth0" - vendorID: "0x1af4" deviceID: "0x1000" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Apply the updated performance profile: USD oc apply -f <your_profile_name>.yaml Additional resources Creating a performance profile . 13.4.2. 
Verifying the queue status In this section, a number of examples illustrate different performance profiles and how to verify the changes are applied. Example 1 In this example, the net queue count is set to the reserved CPU count (2) for all supported devices. The relevant section from the performance profile is: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true # ... Display the status of the queues associated with a device using the following command: Note Run this command on the node where the performance profile was applied. USD ethtool -l <device> Verify the queue status before the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4 Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The combined channel shows that the total count of reserved CPUs for all supported devices is 2. This matches what is configured in the performance profile. Example 2 In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices with a specific vendorID . The relevant section from the performance profile is: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4 # ... Display the status of the queues associated with a device using the following command: Note Run this command on the node where the performance profile was applied. USD ethtool -l <device> Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is 2. For example, if there is another network device ens2 with vendorID=0x1af4 it will also have total net queues of 2. This matches what is configured in the performance profile. Example 3 In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices that match any of the defined device identifiers. The command udevadm info provides a detailed report on a device. In this example the devices are: # udevadm info -p /sys/class/net/ens4 ... E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4 ... # udevadm info -p /sys/class/net/eth0 ... E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0 ... Set the net queues to 2 for a device with interfaceName equal to eth0 and any devices that have a vendorID=0x1af4 with the following performance profile: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4 ... 
Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is set to 2. For example, if there is another network device ens2 with vendorID=0x1af4 , it will also have the total net queues set to 2. Similarly, a device with interfaceName equal to eth0 will have total net queues set to 2. 13.4.3. Logging associated with adjusting NIC queues Log messages detailing the assigned devices are recorded in the respective Tuned daemon logs. The following messages might be recorded to the /var/log/tuned/tuned.log file: An INFO message is recorded detailing the successfully assigned devices: INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3 A WARNING message is recorded if none of the devices can be assigned: WARNING tuned.plugins.base: instance net_test: no matching devices available 13.5. Debugging low latency CNF tuning status The PerformanceProfile custom resource (CR) contains status fields for reporting tuning status and debugging latency degradation issues. These fields report on conditions that describe the state of the operator's reconciliation functionality. A typical issue can arise when the status of machine config pools that are attached to the performance profile are in a degraded state, causing the PerformanceProfile status to degrade. In this case, the machine config pool issues a failure message. The Node Tuning Operator contains the performanceProfile.spec.status.Conditions status field: Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded The Status field contains Conditions that specify Type values that indicate the status of the performance profile: Available All machine configs and Tuned profiles have been created successfully and are available for cluster components are responsible to process them (NTO, MCO, Kubelet). Upgradeable Indicates whether the resources maintained by the Operator are in a state that is safe to upgrade. Progressing Indicates that the deployment process from the performance profile has started. Degraded Indicates an error if: Validation of the performance profile has failed. Creation of all relevant components did not complete successfully. Each of these types contain the following fields: Status The state for the specific type ( true or false ). Timestamp The transaction timestamp. Reason string The machine readable reason. Message string The human readable reason describing the state and error details, if any. 13.5.1. Machine config pools A performance profile and its created products are applied to a node according to an associated machine config pool (MCP). The MCP holds valuable information about the progress of applying the machine configurations created by performance profiles that encompass kernel args, kube config, huge pages allocation, and deployment of rt-kernel. 
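As a quick, hedged check of both the performance profile conditions and the state of its machine config pool from the command line (the profile name performance and the pool name worker-cnf below are examples only, not required names), you can print each condition type and its status with the JSONPath output of oc :

$ oc get performanceprofiles performance -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

$ oc get mcp worker-cnf -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

The first command lists the Available , Upgradeable , Progressing , and Degraded types with their current status for the performance profile; the second lists the equivalent conditions for the associated machine config pool.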
The Performance Profile controller monitors changes in the MCP and updates the performance profile status accordingly. The only conditions returned by the MCP to the performance profile status is when the MCP is Degraded , which leads to performanceProfile.status.condition.Degraded = true . Example The following example is for a performance profile with an associated machine config pool ( worker-cnf ) that was created for it: The associated machine config pool is in a degraded state: # oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h The describe section of the MCP shows the reason: # oc describe mcp worker-cnf Example output Message: Node node-worker-cnf is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found" Reason: 1 nodes are reporting degraded status on sync The degraded state should also appear under the performance profile status field marked as degraded = true : # oc describe performanceprofiles performance Example output Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found". Reason: MCPDegraded Status: True Type: Degraded 13.6. Collecting low latency tuning debugging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including node tuning, NUMA topology, and other information needed to debug issues with low latency setup. For prompt support, supply diagnostic information for both OpenShift Container Platform and low latency tuning. 13.6.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, such as: Resource definitions Audit logs Service logs You can specify one or more images when you run the command by including the --image argument. When you specify an image, the tool collects data related to that feature or product. When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in your current working directory. 13.6.2. About collecting low latency tuning data Use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with low latency tuning, including: The Node Tuning Operator namespaces and child objects. MachineConfigPool and associated MachineConfig objects. The Node Tuning Operator and associated Tuned objects. Linux Kernel command line options. CPU and NUMA topology Basic PCI device information and NUMA locality. 
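If you only need a quick look at a node's kernel command line arguments or CPU and NUMA topology before collecting a full report, you can read them directly from the node with oc debug ; the node name below is a placeholder for one of your tuned worker nodes:

$ oc debug node/<node_name> -- chroot /host cat /proc/cmdline

$ oc debug node/<node_name> -- chroot /host lscpu

These checks complement, but do not replace, the must-gather data described below.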
To collect debugging information with must-gather , you must specify the Performance Addon Operator must-gather image: --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.12. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. However, you must still use the performance-addon-operator-must-gather image when running the must-gather command. 13.6.3. Gathering data about specific features You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command. Note To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11, these functions are part of the Node Tuning Operator. However, you must still use the performance-addon-operator-must-gather image when running the must-gather command. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI (oc) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to the Node Tuning Operator: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.12 2 1 The default OpenShift Container Platform must-gather image. 2 The must-gather image for low latency tuning diagnostics. Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Red Hat Customer Portal . Additional resources For more information about MachineConfig and KubeletConfig, see Managing nodes . For more information about the Node Tuning Operator, see Using the Node Tuning Operator . For more information about the PerformanceProfile, see Configuring huge pages . For more information about consuming huge pages from your containers, see How huge pages are consumed by apps .
[ "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-rt labels: machineconfiguration.openshift.io/role: worker-rt spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-rt], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-rt: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: realTimeKernel: enabled: true nodeSelector: node-role.kubernetes.io/worker-rt: \"\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-rt", "oc describe mcp/worker-rt", "Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt", "oc get node -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME rt-worker-0.example.com Ready worker,worker-rt 5d17h v1.25.0 128.66.135.107 <none> Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa) 4.18.0-305.30.1.rt7.102.el8_4.x86_64 cri-o://1.25.0-99.rhaos4.10.gitc3131de.el8 [...]", "apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: \"200Mi\" cpu: \"1\" requests: memory: \"200Mi\" cpu: \"1\"", "oc apply -f qos-pod.yaml --namespace=qos-example", "oc get pod qos-demo --namespace=qos-example --output=yaml", "spec: containers: status: qosClass: Guaranteed", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile status: runtimeClass: performance-manual", "apiVersion: v1 kind: Pod metadata: annotations: cpu-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>", "apiVersion: v1 kind: Pod metadata: name: example spec: # nodeSelector: node-role.kubernetes.io/worker-rt: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: \"2-23,26-47\" reserved: \"0,1,24,25\" offlined: \"48-59\" 1 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true", "oc apply -f my-performance-profile.yaml", "annotations: cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"<governor>\"", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.12 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather -power-consumption-mode=low-latency \\ 1 --per-pod-power-management=true > my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] 
workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - cpufreq.default_governor=schedutil 1", "spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1", "apiVersion: v1 kind: Pod metadata: annotations: cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"<governor>\" spec: runtimeClassName: performance-<profile_name>", "apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: globallyDisableIrqLoadBalancing: true", "apiVersion: performance.openshift.io/v2 kind: Pod metadata: annotations: irq-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"4-15\" 1 reserved: \"0-3\" 2 hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 16 node: 0 realTimeKernel: enabled: true 3 numa: 4 topologyPolicy: \"best-effort\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" 5", "hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 4 node: 0 1", "oc debug node/ip-10-0-141-105.ec2.internal", "grep -i huge /proc/meminfo", "AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ##", "oc describe node worker-0.ocp4poc.example.com | grep -i huge", "hugepages-1g=true hugepages-###: ### hugepages-###: ###", "spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1", "apiVersion: v1 kind: Pod metadata: name: dynamic-irq-pod annotations: irq-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: dynamic-irq-pod image: \"registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12\" command: [\"sleep\", \"10h\"] resources: requests: cpu: 2 memory: \"200M\" limits: cpu: 2 memory: \"200M\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" runtimeClassName: performance-dynamic-irq-profile", "oc get pod -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dynamic-irq-pod 1/1 Running 0 5h33m <ip-address> <node-name> <none> <none>", "oc exec -it dynamic-irq-pod -- /bin/bash -c \"grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'\"", "Cpus_allowed_list: 2-3", "oc debug node/<node-name>", "Starting pod/<node-name>-debug To use host binaries, run `chroot /host` Pod IP: <ip-address> If you don't see a command prompt, try pressing enter. 
sh-4.4#", "sh-4.4# chroot /host", "sh-4.4#", "cat /proc/irq/default_smp_affinity", "33", "find /proc/irq/ -name smp_affinity_list -exec sh -c 'i=\"USD1\"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \\;", "/proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5", "find /proc/irq -name effective_affinity -printf \"%p: \" -exec cat {} \\;", "/proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2", "lscpu --all --extended", "CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000", "cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "0-4", "cpu: isolated: 0,4 reserved: 1-3,5-7", "\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true", "workloadHints: highPowerConsumption: false realTime: false", "workloadHints: highPowerConsumption: false realTime: true", "workloadHints: highPowerConsumption: true realTime: true", "workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: workloadHints: highPowerConsumption: true 1 realTime: true 2", "\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: 
\"0-4,9\" 1 isolated: \"5-8\" 2 nodeSelector: 3 node-role.kubernetes.io/worker: \"\"", "oc edit -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - interfaceName: \"eth1\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth*\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"!eno1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "oc apply -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "udevadm info -p /sys/class/net/ens4 E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4", "udevadm info -p /sys/class/net/eth0 E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3", "WARNING tuned.plugins.base: instance net_test: no matching devices available", "Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 
2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h", "oc describe mcp worker-cnf", "Message: Node node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync", "oc describe performanceprofiles performance", "Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\". Reason: MCPDegraded Status: True Type: Degraded", "--image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.12.", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.12 2", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/cnf-low-latency-tuning
Chapter 13. Configure InfiniBand and RDMA Networks
Chapter 13. Configure InfiniBand and RDMA Networks 13.1. Understanding InfiniBand and RDMA technologies InfiniBand refers to two distinct things. The first is a physical link-layer protocol for InfiniBand networks. The second is a higher-level programming API called the InfiniBand Verbs API. The InfiniBand Verbs API is an implementation of a remote direct memory access ( RDMA ) technology. RDMA provides direct access from the memory of one computer to the memory of another without involving either computer's operating system. This technology enables high-throughput, low-latency networking with low CPU utilization, which is especially useful in massively parallel computer clusters. In a typical IP data transfer, application X on machine A sends some data to application Y on machine B. As part of the transfer, the kernel on machine B must first receive the data, decode the packet headers, determine that the data belongs to application Y, wake up application Y, wait for application Y to perform a read syscall into the kernel, and then manually copy the data from the kernel's own internal memory space into the buffer provided by application Y. This process means that most network traffic must be copied across the system's main memory bus at least twice (once when the host adapter uses DMA to put the data into the kernel-provided memory buffer, and again when the kernel moves the data to the application's memory buffer), and it also means the computer must execute a number of context switches to switch between kernel context and application Y context. Both of these things impose extremely high CPU loads on the system when network traffic is flowing at very high rates and can cause other tasks to slow down. RDMA communications differ from normal IP communications because they bypass kernel intervention in the communication process, and in doing so greatly reduce the CPU overhead normally needed to process network communications. The RDMA protocol allows the host adapter in the machine to know when a packet comes in from the network, which application should receive that packet, and where in the application's memory space it should go. Instead of sending the packet to the kernel to be processed and then copied into the user application's memory, it places the contents of the packet directly in the application's buffer without any further intervention necessary. However, this cannot be accomplished using the standard Berkeley Sockets API that most IP networking applications are built upon, so RDMA provides its own API, the InfiniBand Verbs API, and applications must be ported to this API before they can use RDMA technology directly. Red Hat Enterprise Linux 7 supports both the InfiniBand hardware and the InfiniBand Verbs API. In addition, there are two supported technologies that allow the InfiniBand Verbs API to be utilized on non-InfiniBand hardware: The Internet Wide Area RDMA Protocol (iWARP) iWARP is a computer networking protocol that implements remote direct memory access (RDMA) for efficient data transfer over Internet Protocol (IP) networks. The RDMA over Converged Ethernet (RoCE) protocol, which was later renamed to InfiniBand over Ethernet (IBoE). RoCE is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. Prerequisites Both iWARP and RoCE technologies have a normal IP network link layer as their underlying technology, and so the majority of their configuration is actually covered in Chapter 3, Configuring IP Networking . 
For the most part, once their IP networking features are properly configured, their RDMA features are all automatic and will show up as long as the proper drivers for the hardware are installed. The kernel drivers are always included with each kernel Red Hat provides; however, the user-space drivers must be installed manually if the InfiniBand package group was not selected at machine install time. Since Red Hat Enterprise Linux 7.4, all RDMA user-space drivers are merged into the rdma-core package. To install all supported iWARP, RoCE or InfiniBand user-space drivers, enter as root : If you are using Priority Flow Control (PFC) and mlx4-based cards, then edit /etc/modprobe.d/mlx4.conf to instruct the driver which packet priority is configured for the " no-drop " service on the Ethernet switches the cards are plugged into, and rebuild the initramfs to include the modified file. Newer mlx5-based cards auto-negotiate PFC settings with the switch and do not need any module option to inform them of the " no-drop " priority or priorities. To set the Mellanox cards to use one or both ports in Ethernet mode, see Section 13.5.4, "Configuring Mellanox cards for Ethernet operation" . With these driver packages installed (in addition to the normal RDMA packages typically installed for any InfiniBand installation), a user should be able to use most of the normal RDMA applications to test and confirm that RDMA protocol communication is taking place on their adapters. However, not all of the programs included in Red Hat Enterprise Linux 7 properly support iWARP or RoCE/IBoE devices. This is because the connection establishment protocol on iWARP in particular is different from that on real InfiniBand link-layer connections. If the program in question uses the librdmacm connection management library, librdmacm handles the differences between iWARP and InfiniBand silently and the program should work. If the application tries to do its own connection management, then it must specifically support iWARP or else it does not work.
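As a minimal sketch of such a test, assuming the libibverbs-utils and librdmacm-utils packages are installed (the host name server.example.com is a placeholder), you can list the RDMA devices visible to the Verbs API and run a simple librdmacm-based ping test between two machines:

~]$ ibv_devices
~]$ ibv_devinfo

Then, on the server: ~]$ rping -s -v
On the client: ~]$ rping -c -a server.example.com -v -C 10

If the adapters and drivers are working, ibv_devinfo reports each RDMA device and the state of its ports, and rping exchanges a small number of RDMA messages between the client and the server.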
[ "~]# yum install libibverbs" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_infiniband_and_rdma_networks
4.3. Configuring Redundant Ring Protocol (RRP)
4.3. Configuring Redundant Ring Protocol (RRP) Note Red Hat supports the configuration of Redundant Ring Protocol (RRP) in clusters subject to the conditions described in the "Redundant Ring Protocol (RRP)" section of Support Policies for RHEL High Availability Clusters - Cluster Interconnect Network Interfaces . When you create a cluster with the pcs cluster setup command, you can configure a cluster with Redundant Ring Protocol by specifying both interfaces for each node. When using the default udpu transport, specify each cluster node as its ring 0 address, followed by a comma, followed by its ring 1 address. For example, the following command configures a cluster named my_rrp_cluster with two nodes, node A and node B. Node A has two interfaces, nodeA-0 and nodeA-1 . Node B has two interfaces, nodeB-0 and nodeB-1 . To configure these nodes as a cluster using RRP, execute the following command. For information on configuring RRP in a cluster that uses udp transport, see the help screen for the pcs cluster setup command.
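After the cluster has been created and started, one way to confirm that both rings are in use is to check the corosync ring status on a cluster node; this is a general corosync check, shown here only as a hedged example:

corosync-cfgtool -s

The output lists each configured ring (ring 0 and ring 1) along with the address it is bound to and its current status.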
[ "pcs cluster setup --name my_rrp_cluster nodeA-0,nodeA-1 nodeB-0,nodeB-1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-configrrp-haar
B.40. libcap-ng
B.40. libcap-ng B.40.1. RHBA-2010:0906 - libcap-ng bug fix update Updated libcap-ng packages that fix a bug are now available for Red Hat Enterprise Linux 6. The libcap-ng library is designed to make programming with POSIX capabilities easier. It is shipped with utilities to analyze the POSIX capabilities of all running applications, as well as tools to set the file system-based capabilities. Bug Fix BZ# 650131 Previously, when listing the file system based capabilities of a single file with the "filecap" utility, it would terminate with a segmentation fault. This error has been fixed, and "filecap" no longer crashes when attempting to list the capabilities of a single file. Users are advised to upgrade to these updated packages, which fix this bug.
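As a hedged usage sketch (the file path is only an example), the operation that previously crashed, listing the file system-based capabilities of a single file, looks like this with the updated packages:

$ filecap /usr/bin/ping

filecap reports the file system-based capabilities, if any, that are set on the given file.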
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/libcap-ng