title | content | commands | url
---|---|---|---|
5.8. Using Zones to Manage Incoming Traffic Depending on Source | 5.8. Using Zones to Manage Incoming Traffic Depending on Source You can use zones to manage incoming traffic based on its source. That enables you to sort incoming traffic and route it through different zones to allow or disallow the services that can be reached by that traffic. If you add a source to a zone, the zone becomes active and any incoming traffic from that source is directed through it. You can specify different settings for each zone, which are applied to the traffic from the given sources. You can use multiple zones even if you only have one network interface. 5.8.1. Adding a Source To route incoming traffic into a specific zone, add the source to that zone. The source can be an IP address or an IP mask in the Classless Inter-domain Routing (CIDR) notation. To set the source in the current zone: To set the source IP address for a specific zone: The following procedure allows all incoming traffic from 192.168.2.15 in the trusted zone: List all available zones: Add the source IP address to the trusted zone: Make the new settings persistent: 5.8.2. Removing a Source Removing a source from a zone cuts off the traffic coming from it. List the allowed sources for the required zone: Remove the source from the zone: Make the new settings persistent: 5.8.3. Adding a Source Port To enable sorting of traffic based on its port of origin, specify a source port using the --add-source-port option. You can also combine this with the --add-source option to limit the traffic to a certain IP address or IP range. To add a source port: 5.8.4. Removing a Source Port By removing a source port, you disable sorting of traffic based on its port of origin. To remove a source port: 5.8.5. Using Zones and Sources to Allow a Service for Only a Specific Domain To allow traffic from a specific network to use a service on a machine, use zones and sources. The following procedure allows only HTTP traffic from the 192.0.2.0/24 network while any other traffic is blocked. Warning When you configure this scenario, use a zone that has the default target. Using a zone that has the target set to ACCEPT is a security risk, because all network connections from 192.0.2.0/24 would then be accepted. List all available zones: Add the IP range to the internal zone to route the traffic originating from the source through the zone: Add the http service to the internal zone: Make the new settings persistent: Check that the internal zone is active and that the service is allowed in it: 5.8.6. Configuring Traffic Accepted by a Zone Based on Protocol You can allow incoming traffic to be accepted by a zone based on the protocol. All traffic using the specified protocol is accepted by the zone, and you can then apply further rules and filtering to it. Adding a Protocol to a Zone By adding a protocol to a certain zone, you allow all traffic with this protocol to be accepted by the zone. To add a protocol to a zone: Note To receive multicast traffic, use the igmp value with the --add-protocol option. Removing a Protocol from a Zone By removing a protocol from a certain zone, the zone stops accepting all traffic based on that protocol. To remove a protocol from a zone: | [
"~]# firewall-cmd --add-source=<source>",
"~]# firewall-cmd --zone= zone-name --add-source=<source>",
"~]# firewall-cmd --get-zones",
"~]# firewall-cmd --zone=trusted --add-source= 192.168.2.15",
"~]# firewall-cmd --runtime-to-permanent",
"~]# firewall-cmd --zone= zone-name --list-sources",
"~]# firewall-cmd --zone= zone-name --remove-source=<source>",
"~]# firewall-cmd --runtime-to-permanent",
"~]# firewall-cmd --zone= zone-name --add-source-port=<port-name>/<tcp|udp|sctp|dccp>",
"~]# firewall-cmd --zone= zone-name --remove-source-port=<port-name>/<tcp|udp|sctp|dccp>",
"~]# firewall-cmd --get-zones block dmz drop external home internal public trusted work",
"~]# firewall-cmd --zone=internal --add-source=192.0.2.0/24",
"~]# firewall-cmd --zone=internal --add-service=http",
"~]# firewall-cmd --runtime-to-permanent",
"~]# firewall-cmd --zone=internal --list-all internal (active) target: default icmp-block-inversion: no interfaces: sources: 192.0.2.0/24 services: dhcpv6-client mdns samba-client ssh http",
"~]# firewall-cmd --zone= zone-name --add-protocol= port-name / tcp|udp|sctp|dccp|igmp",
"~]# firewall-cmd --zone= zone-name --remove-protocol= port-name / tcp|udp|sctp|dccp|igmp"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-using_zones_to_manage_incoming_traffic_depending_on_source |
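To connect the pieces above, the following is a minimal sketch that routes one source network through a zone, allows a single service there, and verifies the result. The zone name internal, the 192.0.2.0/24 network, and the http service come from the example in section 5.8.5; the source-port line is an illustrative extra, and 68/udp is only an assumed value.

```bash
# Sketch only: restrict a service to one source network and verify.
firewall-cmd --get-zones                                  # confirm the zone exists
firewall-cmd --zone=internal --add-source=192.0.2.0/24    # the source activates the zone
firewall-cmd --zone=internal --add-service=http           # allow only the http service
firewall-cmd --runtime-to-permanent                       # make the runtime settings persistent

# Optional: also sort traffic by its port of origin (68/udp is an assumed example).
firewall-cmd --zone=internal --add-source-port=68/udp

# Verify the active zone and its allowed services.
firewall-cmd --get-active-zones
firewall-cmd --zone=internal --list-all
```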
Chapter 5. Installing a cluster on Azure with customizations | Chapter 5. Installing a cluster on Azure with customizations In OpenShift Container Platform version 4.16, you can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. 
If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4. Using the Azure Marketplace offering Using the Azure Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on pay-per-use basis (hourly, per core) through Azure, while still being supported directly by Red Hat. To deploy an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker or control plane nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 Note Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. You must update the compute section in the install-config.yaml file with values for publisher , offer , sku , and version before deploying the cluster. You may also update the controlPlane section to deploy control plane machines with the specified image details, or the defaultMachinePlatform section to deploy both control plane and compute machines with the specified image details. Use the latest available image for control plane and compute nodes. Sample install-config.yaml file with the Azure Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 413.92.2023101700 replicas: 3 5.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. 
Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. 
Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on Azure". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 5.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. 
Minimum resource requirements

Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2]
---|---|---|---|---|---
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300

1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs.
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.

Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 5.6.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 5.1. 
Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 5.6.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 5.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 5.6.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 
4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5.6.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 5.6.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 1 10 14 16 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. 
If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 17 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 18 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.6.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. 
For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 5.7. Configuring user-defined tags for Azure In OpenShift Container Platform, you can use the tags for grouping resources and for managing resource access and cost. You can define the tags on the Azure resources in the install-config.yaml file only during OpenShift Container Platform cluster creation. You cannot modify the user-defined tags after cluster creation. Support for user-defined tags is available only for the resources created in the Azure Public Cloud. User-defined tags are not supported for the OpenShift Container Platform clusters upgraded to OpenShift Container Platform 4.16. User-defined and OpenShift Container Platform specific tags are applied only to the resources created by the OpenShift Container Platform installer and its core operators such as Machine api provider azure Operator, Cluster Ingress Operator, Cluster Image Registry Operator. By default, OpenShift Container Platform installer attaches the OpenShift Container Platform tags to the Azure resources. These OpenShift Container Platform tags are not accessible for the users. You can use the .platform.azure.userTags field in the install-config.yaml file to define the list of user-defined tags as shown in the following install-config.yaml file. 
Sample install-config.yaml file additionalTrustBundlePolicy: Proxyonly 1 apiVersion: v1 baseDomain: catchall.azure.devcluster.openshift.com 2 compute: 3 - architecture: amd64 hyperthreading: Enabled 4 name: worker platform: {} replicas: 3 controlPlane: 5 architecture: amd64 hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: user 7 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: os4-common 9 cloudName: AzurePublicCloud 10 outboundType: Loadbalancer region: southindia 11 userTags: 12 createdBy: user environment: dev 1 Defines the trust bundle policy. 2 Required. The baseDomain parameter specifies the base domain of your cloud provider. The installation program prompts you for this value. 3 The configuration for the machines that comprise compute. The compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - . If you do not provide these parameters and values, the installation program provides the default value. 4 To enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 5 The configuration for the machines that comprise the control plane. The controlPlane section is a single mapping. The first line of the controlPlane section must not begin with a hyphen, - . You can use only one control plane pool. If you do not provide these parameters and values, the installation program provides the default value. 6 To enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 7 The installation program prompts you for this value. 8 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 9 Specifies the resource group for the base domain of the Azure DNS zone. 10 Specifies the name of the Azure cloud environment. You can use the cloudName field to configure the Azure SDK with the Azure API endpoints. If you do not provide value, the default value is Azure Public Cloud. 11 Required. Specifies the name of the Azure region that hosts your cluster. The installation program prompts you for this value. 12 Defines the additional keys and values that the installation program adds as tags to all Azure resources that it creates. The user-defined tags have the following limitations: A tag key can have a maximum of 128 characters. A tag key must begin with a letter, end with a letter, number or underscore, and can contain only letters, numbers, underscores, periods, and hyphens. Tag keys are case-insensitive. Tag keys cannot be name . It cannot have prefixes such as kubernetes.io , openshift.io , microsoft , azure , and windows . A tag value can have a maximum of 256 characters. You can configure a maximum of 10 tags for resource group and resources. 
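Because the tag limitations above are easy to violate, the following hedged sketch shows one way to pre-check a key/value pair before adding it under userTags; the check_tag helper is hypothetical and not part of the installation program.

```bash
# Sketch: sanity-check a userTags entry against the limits documented above.
check_tag() {
  local key=$1 value=$2 lc=${1,,}
  (( ${#key} <= 128 ))   || { echo "key longer than 128 characters"; return 1; }
  (( ${#value} <= 256 )) || { echo "value longer than 256 characters"; return 1; }
  [[ $key =~ ^[A-Za-z]([A-Za-z0-9_.-]*[A-Za-z0-9_])?$ ]] || { echo "invalid key format"; return 1; }
  [[ $lc != name ]] || { echo "'name' is not allowed as a key"; return 1; }
  case $lc in kubernetes.io*|openshift.io*|microsoft*|azure*|windows*)
    echo "key uses a restricted prefix"; return 1 ;;
  esac
  echo "ok: $key=$value"
}

check_tag createdBy user      # matches the sample tags above
check_tag environment dev
```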
For more information about Azure tags, see Azure user-defined tags 5.8. Querying user-defined tags for Azure After creating the OpenShift Container Platform cluster, you can access the list of defined tags for the Azure resources. The format of the OpenShift Container Platform tags is kubernetes.io_cluster.<cluster_id>:owned . The cluster_id parameter is the value of .status.infrastructureName present in config.openshift.io/Infrastructure . Query the tags defined for Azure resources by running the following command: USD oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}' Example output [ [ { "key": "createdBy", "value": "user" }, { "key": "environment", "value": "dev" } ] ] 5.9. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.10. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. 
If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 5.10.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 5.10.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 5.10.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 5.3. Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 5.10.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 
2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 5.10.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
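Optional: before copying the generated files in the next step, you can confirm that the expected directories are present. This is only a sanity-check sketch that reuses the placeholders from this procedure:
USD ls <path_to_ccoctl_output_dir>/manifests <path_to_ccoctl_output_dir>/tls
USD ls <installation_directory>/manifests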
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 5.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 413.92.2023101700 replicas: 3",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"additionalTrustBundlePolicy: Proxyonly 1 apiVersion: v1 baseDomain: catchall.azure.devcluster.openshift.com 2 compute: 3 - architecture: amd64 hyperthreading: Enabled 4 name: worker platform: {} replicas: 3 controlPlane: 5 architecture: amd64 hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: user 7 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: os4-common 9 cloudName: AzurePublicCloud 10 outboundType: Loadbalancer region: southindia 11 userTags: 12 createdBy: user environment: dev",
"oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}'",
"[ [ { \"key\": \"createdBy\", \"value\": \"user\" }, { \"key\": \"environment\", \"value\": \"dev\" } ] ]",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/installing-azure-customizations |
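As a supplement to the deployment section above: if the cluster is restarted after the initial 24-hour certificates have expired, you must approve the pending node-bootstrapper certificate signing requests manually. A minimal sketch of that check uses standard oc commands:
USD oc get csr
USD oc adm certificate approve <csr_name>
Run the second command once for each CSR that the first command lists in the Pending state, where <csr_name> is the name shown in its output.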
Chapter 1. Security APIs | Chapter 1. Security APIs 1.1. CertificateSigningRequest [certificates.k8s.io/v1] Description CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued. Kubelets use this API to obtain: 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName). 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName). This API can be used to request client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client" signerName), or to obtain certificates from custom non-Kubernetes signers. Type object 1.2. CredentialsRequest [cloudcredential.openshift.io/v1] Description CredentialsRequest is the Schema for the credentialsrequests API Type object 1.3. PodSecurityPolicyReview [security.openshift.io/v1] Description PodSecurityPolicyReview checks which service accounts (not users, since that would be cluster-wide) can create the PodTemplateSpec in question. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.4. PodSecurityPolicySelfSubjectReview [security.openshift.io/v1] Description PodSecurityPolicySelfSubjectReview checks whether this user/SA tuple can create the PodTemplateSpec Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.5. PodSecurityPolicySubjectReview [security.openshift.io/v1] Description PodSecurityPolicySubjectReview checks whether a particular user/SA tuple can create the PodTemplateSpec. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.6. RangeAllocation [security.openshift.io/v1] Description RangeAllocation is used so we can easily expose a RangeAllocation typed for security group Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.7. Secret [v1] Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object 1.8. SecurityContextConstraints [security.openshift.io/v1] Description SecurityContextConstraints governs the ability to make requests that affect the SecurityContext that will be applied to a container. For historical reasons SCC was exposed under the core Kubernetes API group. That exposure is deprecated and will be removed in a future release - users should instead use the security.openshift.io group to manage SecurityContextConstraints. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. ServiceAccount [v1] Description ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_apis/security-apis |
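To make the Secret type described in this chapter more concrete, the following is an illustrative sketch that creates a small Opaque secret from a literal value with the oc client; the name, namespace, and key are placeholders rather than values from any real cluster:
USD oc create secret generic example-secret --namespace example-namespace --from-literal=api-token=changeme
USD oc get secret example-secret --namespace example-namespace -o yaml
The second command shows the stored value base64-encoded under the data field.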
Chapter 7. Kafka Streams configuration properties | Chapter 7. Kafka Streams configuration properties application.id Type: string Importance: high An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix. bootstrap.servers Type: list Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). num.standby.replicas Type: int Default: 0 Importance: high The number of standby replicas for each task. state.dir Type: string Default: /tmp/kafka-streams Importance: high Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem. acceptable.recovery.lag Type: long Default: 10000 Valid Values: [0,... ] Importance: medium The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up enough to receive an active task assignment. Upon assignment, it will still restore the rest of the changelog before processing. To avoid a pause in processing during rebalances, this config should correspond to a recovery time of well under a minute for a given workload. Must be at least 0. cache.max.bytes.buffering Type: long Default: 10485760 Valid Values: [0,... ] Importance: medium Maximum number of memory bytes to be used for buffering across all threads. client.id Type: string Default: "" Importance: medium An ID prefix string used for the client IDs of internal [main-|restore-|global-]consumer, producer, and admin clients with pattern <client.id>-[Global]StreamThread[-<threadSequenceNumberUSDgt;]-<consumer|producer|restore-consumer|global-consumer> . default.deserialization.exception.handler Type: class Default: org.apache.kafka.streams.errors.LogAndFailExceptionHandler Importance: medium Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface. default.key.serde Type: class Default: null Importance: medium Default serializer / deserializer class for key that implements the org.apache.kafka.common.serialization.Serde interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well. default.list.key.serde.inner Type: class Default: null Importance: medium Default inner class of list serde for key that implements the org.apache.kafka.common.serialization.Serde interface. This configuration will be read if and only if default.key.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde . default.list.key.serde.type Type: class Default: null Importance: medium Default class for key that implements the java.util.List interface. 
This configuration will be read if and only if default.key.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde Note when list serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.list.key.serde.inner'. default.list.value.serde.inner Type: class Default: null Importance: medium Default inner class of list serde for value that implements the org.apache.kafka.common.serialization.Serde interface. This configuration will be read if and only if default.value.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde . default.list.value.serde.type Type: class Default: null Importance: medium Default class for value that implements the java.util.List interface. This configuration will be read if and only if default.value.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde Note when list serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.list.value.serde.inner'. default.production.exception.handler Type: class Default: org.apache.kafka.streams.errors.DefaultProductionExceptionHandler Importance: medium Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface. default.timestamp.extractor Type: class Default: org.apache.kafka.streams.processor.FailOnInvalidTimestamp Importance: medium Default timestamp extractor class that implements the org.apache.kafka.streams.processor.TimestampExtractor interface. default.value.serde Type: class Default: null Importance: medium Default serializer / deserializer class for value that implements the org.apache.kafka.common.serialization.Serde interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well. max.task.idle.ms Type: long Default: 0 Importance: medium This config controls whether joins and merges may produce out-of-order results. The config value is the maximum amount of time in milliseconds a stream task will stay idle when it is fully caught up on some (but not all) input partitions to wait for producers to send additional records and avoid potential out-of-order record processing across multiple input streams. The default (zero) does not wait for producers to send more records, but it does wait to fetch data that is already present on the brokers. This default means that for records that are already present on the brokers, Streams will process them in timestamp order. Set to -1 to disable idling entirely and process any locally available data, even though doing so may produce out-of-order processing. max.warmup.replicas Type: int Default: 2 Valid Values: [1,... ] Importance: medium The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the purpose of keeping the task available on one instance while it is warming up on another instance it has been reassigned to. Used to throttle how much extra broker traffic and cluster state can be used for high availability. Must be at least 1.Note that one warmup replica corresponds to one Stream Task. 
Furthermore, note that each warmup replica can only be promoted to an active task during a rebalance (normally during a so-called probing rebalance, which occur at a frequency specified by the probing.rebalance.interval.ms config). This means that the maximum rate at which active tasks can be migrated from one Kafka Streams Instance to another instance can be determined by ( max.warmup.replicas / probing.rebalance.interval.ms ). num.stream.threads Type: int Default: 1 Importance: medium The number of threads to execute stream processing. processing.guarantee Type: string Default: at_least_once Valid Values: [at_least_once, exactly_once, exactly_once_beta, exactly_once_v2] Importance: medium The processing guarantee that should be used. Possible values are at_least_once (default) and exactly_once_v2 (requires brokers version 2.5 or higher). Deprecated options are exactly_once (requires brokers version 0.11.0 or higher) and exactly_once_beta (requires brokers version 2.5 or higher). Note that exactly-once processing requires a cluster of at least three brokers by default what is the recommended setting for production; for development you can change this, by adjusting broker setting transaction.state.log.replication.factor and transaction.state.log.min.isr . rack.aware.assignment.non_overlap_cost Type: int Default: null Importance: medium Cost associated with moving tasks from existing assignment. This config and rack.aware.assignment.traffic_cost controls whether the optimization algorithm favors minimizing cross rack traffic or minimize the movement of tasks in existing assignment. If set a larger value org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor will optimize to maintain the existing assignment. The default value is null which means it will use default non_overlap cost values in different assignors. rack.aware.assignment.strategy Type: string Default: none Valid Values: [none, min_traffic, balance_subtopology] Importance: medium The strategy we use for rack aware assignment. Rack aware assignment will take client.rack and racks of TopicPartition into account when assigning tasks to minimize cross rack traffic. Valid settings are : none (default), which will disable rack aware assignment; min_traffic , which will compute minimum cross rack traffic assignment; balance_subtopology , which will compute minimum cross rack traffic and try to balance the tasks of same subtopolgies across different clients. rack.aware.assignment.tags Type: list Default: "" Valid Values: List containing maximum of 5 elements Importance: medium List of client tag keys used to distribute standby replicas across Kafka Streams instances. When configured, Kafka Streams will make a best-effort to distribute the standby tasks over each client tag dimension. rack.aware.assignment.traffic_cost Type: int Default: null Importance: medium Cost associated with cross rack traffic. This config and rack.aware.assignment.non_overlap_cost controls whether the optimization algorithm favors minimizing cross rack traffic or minimize the movement of tasks in existing assignment. If set a larger value org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor will optimize for minimizing cross rack traffic. The default value is null which means it will use default traffic cost values in different assignors. replication.factor Type: int Default: -1 Importance: medium The replication factor for change log topics and repartition topics created by the stream processing application. 
The default of -1 (meaning: use broker default replication factor) requires broker version 2.4 or newer. security.protocol Type: string Default: PLAINTEXT Valid Values: (case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. statestore.cache.max.bytes Type: long Default: 10485760 (10 mebibytes) Valid Values: [0,... ] Importance: medium Maximum number of memory bytes to be used for statestore cache across all threads. task.timeout.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: medium The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a timeout of 0ms, a task would raise an error for the first internal error. For any timeout larger than 0ms, a task will retry at least once before an error is raised. topology.optimization Type: string Default: none Valid Values: org.apache.kafka.streams.StreamsConfigUSDUSDLambdaUSD31/0x00007f269c00d000@7e32c033 Importance: medium A configuration telling Kafka Streams if it should optimize the topology and what optimizations to apply. Acceptable values are: "NO_OPTIMIZATION", "OPTIMIZE", or a comma separated list of specific optimizations: ("REUSE_KTABLE_SOURCE_TOPICS", "MERGE_REPARTITION_TOPICS" + "SINGLE_STORE_SELF_JOIN+")."NO_OPTIMIZATION" by default. application.server Type: string Default: "" Importance: low A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this KafkaStreams instance. auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. buffered.records.per.partition Type: int Default: 1000 Importance: low Maximum number of records to buffer per partition. built.in.metrics.version Type: string Default: latest Valid Values: [latest] Importance: low Version of the built-in metrics to use. commit.interval.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The frequency in milliseconds with which to commit processing progress. For at-least-once processing, committing means to save the position (ie, offsets) of the processor. For exactly-once processing, it means to commit the transaction which includes to save the position and to make the committed data in the output topic visible to consumers with isolation level read_committed. (Note, if processing.guarantee is set to exactly_once_v2 , exactly_once ,the default value is 100 , otherwise the default value is 30000 . connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: low Close idle connections after the number of milliseconds specified by this config. default.client.supplier Type: class Default: org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier Importance: low Client supplier class that implements the org.apache.kafka.streams.KafkaClientSupplier interface. default.dsl.store Type: string Default: rocksDB Valid Values: [rocksDB, in_memory] Importance: low The default state store type used by DSL operators. 
dsl.store.suppliers.class Type: class Default: org.apache.kafka.streams.state.BuiltInDslStoreSuppliersUSDRocksDBDslStoreSuppliers Importance: low Defines which store implementations to plug in to DSL operators. Must implement the org.apache.kafka.streams.state.DslStoreSuppliers interface. enable.metrics.push Type: boolean Default: true Importance: low Whether to enable pushing of internal [main-|restore-|global]consumer, producer, and admin client metrics to the cluster, if the cluster has a client metrics subscription which matches a client. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. poll.ms Type: long Default: 100 Importance: low The amount of time in milliseconds to block waiting for input. probing.rebalance.interval.ms Type: long Default: 600000 (10 minutes) Valid Values: [60000,... ] Importance: low The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have finished warming up and are ready to become active. Probing rebalances will continue to be triggered until the assignment is balanced. Must be at least 1 minute. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: low The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value. repartition.purge.interval.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The frequency in milliseconds with which to delete fully consumed records from repartition topics. Purging will occur after at least this value since the last purge, but may be delayed until later. 
(Note, unlike commit.interval.ms , the default for this value remains unchanged when processing.guarantee is set to exactly_once_v2 ). request.timeout.ms Type: int Default: 40000 (40 seconds) Valid Values: [0,... ] Importance: low The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. retries Type: int Default: 0 Valid Values: [0,... ,2147483647] Importance: low Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or MAX_VALUE and use corresponding timeout parameters to control how long a client should retry a request. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value. rocksdb.config.setter Type: class Default: null Importance: low A Rocks DB config setter class or class name that implements the org.apache.kafka.streams.state.RocksDBConfigSetter interface. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: low The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. state.cleanup.delay.ms Type: long Default: 600000 (10 minutes) Importance: low The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have not been modified for at least state.cleanup.delay.ms will be removed. upgrade.from Type: string Default: null Valid Values: [null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6] Importance: low Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is null . Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6(for upgrading from the corresponding old version). window.size.ms Type: long Default: null Importance: low Sets window size for the deserializer in order to calculate window end times. windowed.inner.class.serde Type: string Default: null Importance: low Default serializer / deserializer for the inner class of a windowed record. Must implement the org.apache.kafka.common.serialization.Serde interface. Note that setting this config in KafkaStreams application would result in an error as it is meant to be used only from Plain consumer client. windowstore.changelog.additional.retention.ms Type: long Default: 86400000 (1 day) Importance: low Added to a windows maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_properties/kafka-streams-configuration-properties-str |
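To illustrate how a few of the properties documented in this chapter fit together, the following is a minimal sketch that writes a streams properties file from a shell; every property name is taken from this chapter, while the values are placeholders for your own environment:
cat > /tmp/streams.properties <<'EOF'
application.id=example-streams-app
bootstrap.servers=broker1:9092,broker2:9092
num.standby.replicas=1
processing.guarantee=exactly_once_v2
state.dir=/var/lib/kafka-streams
commit.interval.ms=30000
EOF
The file can then be loaded into a java.util.Properties object and passed to the KafkaStreams constructor of your application.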
8.11. Scanning and Remediating Configuration Compliance of Container Images and Containers Using atomic scan | 8.11. Scanning and Remediating Configuration Compliance of Container Images and Containers Using atomic scan 8.11.1. Scanning for Configuration Compliance of Container Images and Containers Using atomic scan Use this type of scanning to evaluate Red Hat Enterprise Linux-based container images and containers with the SCAP content provided by the SCAP Security Guide (SSG) bundled inside the OpenSCAP container image. This enables scanning against any profile provided by the SCAP Security Guide. Warning The atomic scan functionality is deprecated, and the OpenSCAP container image is no longer updated with the new security compliance content. Therefore, prefer the oscap-docker utility for security compliance scanning purposes. Note For a detailed description of the usage of the atomic command and containers, see the Product Documentation for Red Hat Enterprise Linux Atomic Host 7 . The Red Hat Customer Portal also provides a guide to the atomic command-line interface (CLI) . Prerequisites You have downloaded and installed the OpenSCAP container image from Red Hat Container Catalog (RHCC) using the atomic install rhel7/openscap command. Procedure List SCAP content provided by the OpenSCAP image for the configuration_compliance scan: Verify compliance of the latest Red Hat Enterprise Linux 7 container image with the Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) policy and generate an HTML report from the scan: The output of the command contains the information about files associated with the scan at the end: The atomic scan generates a subdirectory with all the results and reports from a scan in the /var/lib/atomic/openscap/ directory. The arf.xml file with results is generated on every scanning for configuration compliance. To generate a human-readable HTML report file, add the report suboption to the --scanner_args option. Optional: To generate XCCDF results readable by DISA STIG Viewer , add the stig-viewer suboption to the --scanner_args option. The results are placed in stig.xml . Note When the xccdf-id suboption of the --scanner_args option is omitted, the scanner searches for a profile in the first XCCDF component of the selected data stream file. For more details about data stream files, see Section 8.3.1, "Configuration Compliance in RHEL 7" . 8.11.2. Remediating Configuration Compliance of Container Images and Containers Using atomic scan You can run the configuration compliance scan against the original container image to check its compliance with the DISA STIG policy. Based on the scan results, a fix script containing bash remediations for the failed scan results is generated. The fix script is then applied to the original container image - this is called a remediation. The remediation results in a container image with an altered configuration, which is added as a new layer on top of the original container image. Important Note that the original container image remains unchanged and only a new layer is created on top of it. The remediation process builds a new container image that contains all the configuration improvements. The content of this layer is defined by the security policy of scanning - in the case, the DISA STIG policy. This also means that the remediated container image is no longer signed by Red Hat, which is expected, because it differs from the original container image by containing the remediated layer. 
Warning The atomic scan functionality is deprecated, and the OpenSCAP container image is no longer updated with the new security compliance content. Therefore, prefer the oscap-docker utility for security compliance scanning purposes. Prerequisites You have downloaded and installed the OpenSCAP container image from Red Hat Container Catalog (RHCC) using the atomic install rhel7/openscap command. Procedure List SCAP content provided by the OpenSCAP image for the configuration_compliance scan: To remediate container images to the specified policy, add the --remediate option to the atomic scan command when scanning for configuration compliance. The following command builds a new remediated container image compliant with the DISA STIG policy from the Red Hat Enterprise Linux 7 container image: Optional: The output of the atomic scan command reports a remediated image ID. To make the image easier to remember, tag it with some name, for example: | [
"~]# atomic help registry.access.redhat.com/rhel7/openscap",
"~]# atomic scan --scan_type configuration_compliance --scanner_args xccdf-id= scap_org.open-scap_cref_ssg-rhel7-xccdf-1.2.xml , profile= xccdf_org.ssgproject.content_profile_stig-rhel7-disa , report registry.access.redhat.com/rhel7:latest",
"......... Files associated with this scan are in /var/lib/atomic/openscap/2017-11-03-13-35-34-296606. ~]# tree /var/lib/atomic/openscap/2017-11-03-13-35-34-296606 /var/lib/atomic/openscap/2017-11-03-13-35-34-296606 ├── db7a70a0414e589d7c8c162712b329d4fc670fa47ddde721250fb9fcdbed9cc2 │ ├── arf.xml │ ├── fix.sh │ ├── json │ └── report.html └── environment.json 1 directory, 5 files",
"~]# atomic help registry.access.redhat.com/rhel7/openscap",
"~]# atomic scan --remediate --scan_type configuration_compliance --scanner_args profile= xccdf_org.ssgproject.content_profile_stig-rhel7-disa , report registry.access.redhat.com/rhel7:latest registry.access.redhat.com/rhel7:latest (db7a70a0414e589) The following issues were found: ......... Configure Time Service Maxpoll Interval Severity: Low XCCDF result: fail Configure LDAP Client to Use TLS For All Transactions Severity: Moderate XCCDF result: fail ......... Remediating rule 43/44: 'xccdf_org.ssgproject.content_rule_chronyd_or_ntpd_set_maxpoll' Remediating rule 44/44: 'xccdf_org.ssgproject.content_rule_ldap_client_start_tls' Successfully built 9bbc7083760e Successfully built remediated image 9bbc7083760e from db7a70a0414e589d7c8c162712b329d4fc670fa47ddde721250fb9fcdbed9cc2. Files associated with this scan are in /var/lib/atomic/openscap/2017-11-06-13-01-42-785000.",
"~]# docker tag 9bbc7083760e rhel7_disa_stig"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/scanning-and-remediating-configuration-compliance-images-containers-atomic_assessing-security-compliance-container-image-baseline_scanning-compliance-and-vulnerabilities |
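A small follow-up sketch, not part of the official procedure: because each atomic scan run writes its results into a timestamped subdirectory of /var/lib/atomic/openscap/ , you can locate the most recent run with standard shell tools and then open the report.html file inside it in a browser:
~]# ls -t /var/lib/atomic/openscap/ | head -n 1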
CI/CD overview | CI/CD overview OpenShift Container Platform 4.18 Contains information about CI/CD for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cicd_overview/index |
Chapter 17. KubeAPIServer [operator.openshift.io/v1] | Chapter 17. KubeAPIServer [operator.openshift.io/v1] Description KubeAPIServer provides information to configure an operator to manage kube-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes API Server status object status is the most recently observed status of the Kubernetes API Server 17.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes API Server Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 17.1.2. 
.status Description status is the most recently observed status of the Kubernetes API Server Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state serviceAccountIssuers array serviceAccountIssuers tracks history of used service account issuers. The item without expiration time represents the currently used service account issuer. The other items represents service account issuers that were used previously and are still being trusted. The default expiration for the items is set by the platform and it defaults to 24h. see: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection serviceAccountIssuers[] object version string version is the level this availability applies to 17.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 17.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 17.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 17.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 17.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 17.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. 
lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 17.1.9. .status.serviceAccountIssuers Description serviceAccountIssuers tracks history of used service account issuers. The item without expiration time represents the currently used service account issuer. The other items represents service account issuers that were used previously and are still being trusted. The default expiration for the items is set by the platform and it defaults to 24h. see: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection Type array 17.1.10. .status.serviceAccountIssuers[] Description Type object Property Type Description expirationTime string expirationTime is the time after which this service account issuer will be pruned and removed from the trusted list of service account issuers. name string name is the name of the service account issuer --- 17.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubeapiservers DELETE : delete collection of KubeAPIServer GET : list objects of kind KubeAPIServer POST : create a KubeAPIServer /apis/operator.openshift.io/v1/kubeapiservers/{name} DELETE : delete a KubeAPIServer GET : read the specified KubeAPIServer PATCH : partially update the specified KubeAPIServer PUT : replace the specified KubeAPIServer /apis/operator.openshift.io/v1/kubeapiservers/{name}/status GET : read status of the specified KubeAPIServer PATCH : partially update status of the specified KubeAPIServer PUT : replace status of the specified KubeAPIServer 17.2.1. /apis/operator.openshift.io/v1/kubeapiservers HTTP method DELETE Description delete collection of KubeAPIServer Table 17.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeAPIServer Table 17.2. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeAPIServer Table 17.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.4. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.5. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 202 - Accepted KubeAPIServer schema 401 - Unauthorized Empty 17.2.2. /apis/operator.openshift.io/v1/kubeapiservers/{name} Table 17.6. Global path parameters Parameter Type Description name string name of the KubeAPIServer HTTP method DELETE Description delete a KubeAPIServer Table 17.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 17.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeAPIServer Table 17.9. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeAPIServer Table 17.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.11. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeAPIServer Table 17.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.13. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.14. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 401 - Unauthorized Empty 17.2.3. /apis/operator.openshift.io/v1/kubeapiservers/{name}/status Table 17.15. Global path parameters Parameter Type Description name string name of the KubeAPIServer HTTP method GET Description read status of the specified KubeAPIServer Table 17.16. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeAPIServer Table 17.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.18. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeAPIServer Table 17.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.20. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.21. HTTP responses HTTP code Response body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/kubeapiserver-operator-openshift-io-v1
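The endpoints listed above follow the standard Kubernetes verbs, so you can exercise them with ordinary oc commands. The following sketch is illustrative rather than normative: the instance name cluster is the conventional singleton name for this operator resource, jq is assumed to be available and is used only to trim output, and logLevel is used purely as an example spec field; adjust all three for your environment.

List the KubeAPIServer objects, which corresponds to the GET collection endpoint:

oc get kubeapiserver

Read the status of the cluster instance, which corresponds to the GET status endpoint:

oc get --raw /apis/operator.openshift.io/v1/kubeapiservers/cluster/status | jq '.status.conditions[] | {type, status, reason}'

Validate a change without persisting it, which mirrors the dryRun query parameter described above:

oc patch kubeapiserver cluster --type=merge -p '{"spec":{"logLevel":"Normal"}}' --dry-run=server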
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/monitoring_openshift_data_foundation/making-open-source-more-inclusive |
Chapter 3. Red Hat support for cloud-init | Chapter 3. Red Hat support for cloud-init Red Hat supports the cloud-init utility, cloud-init modules, and default directories and files across various Red Hat products. 3.1. cloud-init significant directories and files By using directories and files in the following table, you can perform tasks such as: Configuring cloud-init Finding information about your configuration after cloud-init has run Examining log files Finding templates Depending on your scenario and datasource, there can be additional files and directories important to your configuration. Table 3.1. cloud-init directories and files Directory or File Description /etc/cloud/cloud.cfg The cloud.cfg file includes the basic cloud-init configuration and lets you know in what phase each module runs. /etc/cloud/cloud.cfg.d The cloud.cfg.d directory is where you can add additional directives for cloud-init . /var/lib/cloud When cloud-init runs, it creates a directory layout under /var/lib/cloud . The layout includes directories and files that give specifics on your instance configuration. /usr/share/doc/cloud-init/examples The examples directory includes multiple examples. You can use them to help model your own directives. /etc/cloud/templates This directory includes templates that you can enable in cloud-init for certain scenarios. The templates provide direction for enabling. /var/log/cloud-init.log The cloud-init.log file provides log information helpful for debugging. /run/cloud-init The /run/cloud-init directory includes logged information about your datasource and the ds-identify script. 3.2. Red Hat products that use cloud-init You can use cloud-init with these Red Hat products: Red Hat Virtualization. Once you install cloud-init on a VM, you can create a template and leverage cloud-init functions for all VMs created from that template. Refer to Using Cloud-Init to Automate the Configuration of Virtual Machines for information about using cloud-init with VMs. Red Hat OpenStack Platform. You can use cloud-init to help configure images for OpenStack. Refer to the Instances and Images Guide for more information. Red Hat Satellite. You can use cloud-init with Red Hat Satellite. Refer to Preparing Cloud-init Images in Red Hat Virtualization for more information. Red Hat OpenShift. You can use cloud-init when you create VMs for OpenShift. Refer to Creating Virtual Machines for more information. 3.3. Red Hat supports these cloud-init modules Red Hat supports most cloud-init modules. Individual modules can contain multiple configuration options. In the following table, you can find all of the cloud-init modules that Red Hat currently supports and provides a brief description and the default module frequency. Refer to Modules in the cloud-init Documentation section for complete descriptions and options for these modules. Table 3.2. 
Supported cloud-init modules cloud-init Module Description Default Module Frequency bootcmd Runs commands early in the boot process per always ca_certs Adds CA certificates per instance debug Enables or disables output of internal information to assist with debugging per instance disable_ec2_metadata Enables or disables the AWS EC2 metadata per always disk_setup Configures simple partition tables and file systems per instance final_message Specifies the output message once cloud-init completes per always foo Example shows module structure (Module does nothing) per instance growpart Resizes partitions to fill the available disk space per always keys_to_console Allows controls of fingerprints and keys that can be written to the console per instance landscape Installs and configures a landscape client per instance locale Configures the system locale and applies it system-wide per instance mcollective Installs, configures, and starts mcollective per instance migrator Moves old versions of cloud-init to newer versions per always mounts Configures mount points and swap files per instance phone_home Posts data to a remote host after boot completes per instance power_state_change Completes shutdown and reboot after all configuration modules have run per instance puppet Installs and configures puppet per instance resizefs Resizes a file system to use all available space on a partition per always resolv_conf Configures resolv.conf per instance rh_subscription Registers a Red Hat Enterprise Linux system per instance rightscale_userdata Adds support for RightScale configuration hooks to cloud-init per instance rsyslog Configures remote system logging using rsyslog per instance runcmd Runs arbitrary commands per instance salt_minion Installs, configures, and starts salt minion per instance scripts_per_boot Runs per boot scripts per always scripts_per_instance Runs per instance scripts per instance scripts_per_once Runs scripts once per once scripts_user Runs user scripts per instance scripts_vendor Runs vendor scripts per instance seed_random Provides random seed data per instance set_hostname Sets host name and fully qualified domain name (FQDN) per always set_passwords Sets user passwords and enables or disables SSH password authentication per instance ssh_authkey_fingerprints Logs fingerprints of user SSH keys per instance ssh_import_id Imports SSH keys per instance ssh Configures SSH, and host and authorized SSH keys per instance timezone Sets the system time zone per instance update_etc_hosts Updates /etc/hosts per always update_hostname Updates host name and FQDN per always users_groups Configures users and groups per instance write_files Writes arbitrary files per instance yum_add_repo Adds yum repository configuration to the system per always The following list of modules is not supported by Red Hat: Table 3.3. Modules not supported Module apt_configure apt_pipeline byobu chef emit_upstart grub_dpkg ubuntu_init_switch 3.4. The default cloud.cfg file The /etc/cloud/cloud.cfg file lists the modules comprising the basic configuration for cloud-init . The modules in the file are the default modules for cloud-init . You can configure the modules for your environment or remove modules you do not need. Modules that are included in cloud.cfg do not necessarily do anything by being listed in the file. You need to configure them individually if you want them to perform actions during one of the cloud-init phases. The cloud.cfg file provides the chronology for running individual modules. 
You can add additional modules to cloud.cfg as long as Red Hat supports the modules you want to add. The default contents of the file for Red Hat Enterprise Linux (RHEL) are as follows: Note Modules run in the order given in cloud.cfg . You typically do not change this order. The cloud.cfg directives can be overridden by user data. When running cloud-init manually, you can override cloud.cfg with command line options. Each module includes its own configuration options, where you can add specific information. To ensure optimal functionality of the configuration, prefer using module names with underscores ( _ ) rather than dashes ( - ). 1 Specifies the default user for the system. Refer to Users and Groups for more information. 2 Enables or disables root login. Refer to Authorized Keys for more information. 3 Specifies whether ssh is configured to accept password authentication. Refer to Set Passwords for more information. 4 Configures mount points; must be a list containing six values. Refer to Mounts for more information. 5 Specifies whether to remove default host SSH keys. Refer to Host Keys for more information. 6 Specifies key types to generate. Refer to Host Keys for more information. Note that for RHEL 8.4 and earlier, the default value of this line is ~ . 7 cloud-init runs at multiple stages of boot. Set this option so that cloud-init can log all stages to its log file. Find more information about this option in the cloud-config.txt file in the usr/share/doc/cloud-init/examples directory. 8 Enables or disables VMware vSphere customization 9 The modules in this section are services that run when the cloud-init service starts, early in the boot process. 10 These modules run during cloud-init configuration, after initial boot. 11 These modules run in the final phase of cloud-init , after the configuration finishes. 12 Specifies details about the default user. Refer to Users and Groups for more information. 13 Specifies the distribution 14 Specifies the main directory that contains cloud-init -specific subdirectories. Refer to Directory layout for more information. 15 Specifies where templates reside 16 The name of the SSH service Additional resources Modules 3.5. The cloud.cfg.d directory cloud-init acts upon directives that you provide and configure. Typically, those directives are included in the cloud.cfg.d directory. Note While you can configure modules by adding user data directives within the cloud.cfg file, as a best practice consider leaving cloud.cfg unmodified. Add your directives to the /etc/cloud/cloud.cfg.d directory. Adding directives to this directory can make future modifications and upgrades easier. There are multiple ways to add directives. You can include directives in a file named *.cfg , which includes the heading #cloud-config . Typically, the directory would contain multiple *cfg files. There are other options for adding directives, for example, you can add a user data script. Refer to User-Data Formats for more information. Additional resources Cloud config examples 3.6. The default 05_logging.cfg file The 05_logging.cfg file sets logging information for cloud-init . The /etc/cloud/cloud.cfg.d directory includes this file, along with other cloud-init directives that you add. cloud-init uses the logging configuration in 05_logging.cfg by default. The default contents of the file for Red Hat Enterprise Linux (RHEL) are as follows: Additional resources Logging 3.7. 
The cloud-init /var/lib/cloud directory layout When cloud-init first runs, it creates a directory layout that includes information about your instance and cloud-init configuration. The directory can include optional directories, such as /scripts/vendor . The following is a sample directory layout for cloud-init : Additional resources Directory layout | [
"users: 1 - default disable_root: true 2 resize_rootfs_tmp: /dev ssh_pwauth: false 3 mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2'] 4 ssh_deletekeys: true 5 ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519'] 6 syslog_fix_perms: ~ 7 disable_vmware_customization: false 8 cloud_init_modules: 9 - migrator - seed_random - bootcmd - write_files - growpart - resizefs - disk_setup - mounts - set_hostname - update_hostname - update_etc_hosts - ca_certs - rsyslog - users_groups - ssh cloud_config_modules: 10 - ssh_import_id - locale - set_passwords - rh_subscription - spacewalk - yum_add_repo - ntp - timezone - disable_ec2_metadata - runcmd cloud_final_modules: 11 - package_update_upgrade_install - write_files_deferred - puppet - chef - ansible - mcollective - salt_minion - reset_rmc - rightscale_userdata - scripts_vendor - scripts_per_once - scripts_per_boot - scripts_per_instance - scripts_user - ssh_authkey_fingerprints - keys_to_console - install_hotplug - phone_home - final_message - power_state_change system_info: default_user: 12 name: cloud-user lock_passwd: true gecos: Cloud User groups: [adm, systemd-journal] sudo: [\"ALL=(ALL) NOPASSWD:ALL\"] shell: /bin/bash distro: rhel 13 network: renderers: ['sysconfig', 'eni', 'netplan', 'network-manager', 'networkd'] paths: cloud_dir: /var/lib/cloud 14 templates_dir: /etc/cloud/templates 15 ssh_svcname: sshd 16 vim:syntax=yaml",
"## This yaml formatted config file handles setting ## logger information. The values that are necessary to be set ## are seen at the bottom. The top '_log' are only used to remove ## redundancy in a syslog and fallback-to-file case. ## ## The 'log_cfgs' entry defines a list of logger configs ## Each entry in the list is tried, and the first one that ## works is used. If a log_cfg list entry is an array, it will ## be joined with '\\n'. _log: - &log_base | [loggers] keys=root,cloudinit [handlers] keys=consoleHandler,cloudLogHandler [formatters] keys=simpleFormatter,arg0Formatter [logger_root] level=DEBUG handlers=consoleHandler,cloudLogHandler [logger_cloudinit] level=DEBUG qualname=cloudinit handlers= propagate=1 [handler_consoleHandler] class=StreamHandler level=WARNING formatter=arg0Formatter args=(sys.stderr,) [formatter_arg0Formatter] format=%(asctime)s - %(filename)s[%(levelname)s]: %(message)s [formatter_simpleFormatter] format=[CLOUDINIT] %(filename)s[%(levelname)s]: %(message)s - &log_file | [handler_cloudLogHandler] class=FileHandler level=DEBUG formatter=arg0Formatter args=('/var/log/cloud-init.log',) - &log_syslog | [handler_cloudLogHandler] class=handlers.SysLogHandler level=DEBUG formatter=simpleFormatter args=(\"/dev/log\", handlers.SysLogHandler.LOG_USER) log_cfgs: Array entries in this list will be joined into a string that defines the configuration. # If you want logs to go to syslog, uncomment the following line. - [ *log_base, *log_syslog ] # The default behavior is to just log to a file. This mechanism that does not depend on a system service to operate. - [ *log_base, *log_file ] A file path can also be used. - /etc/log.conf This tells cloud-init to redirect its stdout and stderr to 'tee -a /var/log/cloud-init-output.log' so the user can see output there without needing to look on the console. output: {all: '| tee -a /var/log/cloud-init-output.log'}",
"/var/lib/cloud/ - data/ - instance-id - previous-instance-id - previous-datasource - previous-hostname - result.json - set-hostname - status.json - handlers/ - instance - boot-finished - cloud-config.txt - datasource - handlers/ - obj.pkl - scripts/ - sem/ - user-data.txt - user-data.txt.i - vendor-data.txt - vendor-data.txt.i - instances/ f111ee00-0a4a-4eea-9c17-3fa164739c55/ - boot-finished - cloud-config.txt - datasource - handlers/ - obj.pkl - scripts/ - sem/ - user-data.txt - user-data.txt.i - vendor-data.txt - vendor-data.txt.i - scripts/ - per-boot/ - per-instance/ - per-once/ - vendor/ - seed/ - sem/ - config_scripts_per_once.once"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_cloud-init_for_rhel_8/red-hat-support-for-cloud-init_cloud-content |
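As a minimal sketch of the cloud.cfg.d drop-in format described above, a file such as /etc/cloud/cloud.cfg.d/10_example.cfg (the file name is arbitrary; what matters is the .cfg suffix and the #cloud-config header) could combine directives for the supported timezone, write_files, and runcmd modules. The path, time zone, and command below are placeholders, not recommendations:

#cloud-config
timezone: Europe/Prague
write_files:
  - path: /etc/motd
    content: |
      This instance is managed by cloud-init.
runcmd:
  - [ systemctl, restart, sshd ]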
Chapter 3. Metrics | Chapter 3. Metrics 3.1. Metrics in the Block and File dashboard You can navigate to the Block and File dashboard in the OpenShift Web Console as follows: Click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Click the Block and File tab. The following cards on the Block and File dashboard provide the metrics based on deployment mode (internal or external): Details card The Details card shows the following: Service name Cluster name The name of the Provider on which the system runs, for example, AWS , VSphere , None for Bare metal. Mode (deployment mode as either Internal or External) OpenShift Data Foundation operator version In-transit encryption (shows whether the encryption is enabled or disabled) Storage Efficiency card This card shows the compression ratio that represents a compressible data effectiveness metric, which includes all the compression-enabled pools. This card also shows the savings metric that represents the actual disk capacity saved, which includes all the compression-enabled pools and associated replicas. Inventory card The Inventory card shows the total number of active nodes, disks, pools, storage classes, PVCs, and deployments backed by OpenShift Data Foundation provisioner. Note For external mode, the number of nodes will be 0 by default as there are no dedicated nodes for OpenShift Data Foundation. Status card This card shows whether the cluster is up and running without any errors or is experiencing some issues. For internal mode, Data Resiliency indicates the status of data re-balancing in Ceph across the replicas. When the internal mode cluster is in a warning or error state, the Alerts section is shown along with the relevant alerts. For external mode, Data Resiliency and alerts are not displayed Raw Capacity card This card shows the total raw storage capacity, which includes replication on the cluster. Used legend indicates space used raw storage capacity on the cluster Available legend indicates the available raw storage capacity on the cluster Note This card is not applicable for external mode clusters. Requested Capacity This card shows the actual amount of non-replicated data stored in the cluster and its distribution. You can choose between Projects, Storage Classes, Pods, and Peristent Volume Claims from the drop-down list on the top of the card. You need to select a namespace for the Persistent Volume Claims option. These options are for filtering the data shown in the graph. The graph displays the requested capacity for only the top five entities based on usage. The aggregate requested capacity of the remaining entities is displayed as Other. Option Display Projects The aggregated capacity of each project which is using the OpenShift Data Foundation and how much is being used. Storage Classes The aggregate capacity which is based on the OpenShift Data Foundation based storage classes. Pods All the pods that are trying to use the PVC that are backed by OpenShift Data Foundation provisioner. PVCs All the PVCs in the namespace that you selected from the dropdown list and that are mounted on to an active pod. PVCs that are not attached to pods are not included. For external mode, see the Capacity breakdown card . Capacity breakdown card This card is only applicable for external mode clusters. In this card, you can view a graphic breakdown of capacity per project, storage classes, and pods. 
You can choose between Projects, Storage Classes, and Pods from the drop-down menu on the top of the card. These options are for filtering the data shown in the graph. The graph displays the used capacity for only the top five entities based on usage. The aggregate usage of the remaining entities is displayed as Other . Utilization card The card shows used capacity, input/output operations per second, latency, throughput, and recovery information for the internal mode cluster. For external mode, this card shows only the used and requested capacity details for that cluster. Activity card This card shows the current and the past activities of the OpenShift Data Foundation cluster. The card is separated into two sections: Ongoing : Displays the progress of ongoing activities related to rebuilding of data resiliency and upgrading of OpenShift Data Foundation operator. Recent Events : Displays the list of events that happened in the openshift-storage namespace. 3.2. Metrics in the Object dashboard You can navigate to the Object dashboard in the OpenShift Web Console as follows: Click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Click the Object tab. The following metrics are available in the Object dashboard: Details card This card shows the following information: Service Name : The Multicloud Object Gateway (MCG) service name. System Name : The Multicloud Object Gateway and RADOS Object Gateway system names. The Multicloud Object Gateway system name is also a hyperlink to the MCG management user interface. Provider : The name of the provider on which the system runs (example: AWS , VSphere , None for Baremetal) Version : OpenShift Data Foundation operator version. Storage Efficiency card In this card you can view how the MCG optimizes the consumption of the storage backend resources through deduplication and compression and provides you with a calculated efficiency ratio (application data vs logical data) and an estimated savings figure (how many bytes the MCG did not send to the storage provider) based on capacity of bare metal and cloud based storage and egress of cloud based storage. Buckets card Buckets are containers maintained by the MCG and RADOS Object Gateway to store data on behalf of the applications. These buckets are created and accessed through object bucket claims (OBCs). A specific policy can be applied to a bucket to customize data placement, data spill-over, data resiliency, capacity quotas, and so on. In this card, information about object buckets (OB) and object bucket claims (OBCs) is shown separately. OB includes all the buckets that are created using S3 or the user interface (UI) and OBC includes all the buckets created using YAMLs or the command line interface (CLI). The number displayed on the left of the bucket type is the total count of OBs or OBCs. The number displayed on the right shows the error count and is visible only when the error count is greater than zero. You can click on the number to see the list of buckets that has the warning or error status. Resource Providers card This card displays a list of all Multicloud Object Gateway and RADOS Object Gateway resources that are currently in use. Those resources are used to store data according to the buckets policies and can be a cloud-based resource or a bare metal resource. Status card This card shows whether the system and its services are running without any issues. 
When the system is in a warning or error state, the alerts section is shown and the relevant alerts are displayed there. Click the alert links beside each alert for more information about the issue. For information about health checks, see Cluster health . If multiple object storage services are available in the cluster, click the service type (such as Object Service or Data Resiliency ) to see the state of the individual services. Data resiliency in the status card indicates if there is any resiliency issue regarding the data stored through the Multicloud Object Gateway and RADOS Object Gateway. Capacity breakdown card In this card you can visualize how applications consume the object storage through the Multicloud Object Gateway and RADOS Object Gateway. You can use the Service Type drop-down to view the capacity breakdown for the Multicloud Gateway and Object Gateway separately. When viewing the Multicloud Object Gateway, you can use the Break By drop-down to filter the results in the graph by either Projects or Bucket Class . Performance card In this card, you can view the performance of the Multicloud Object Gateway or RADOS Object Gateway. Use the Service Type drop-down to choose which you would like to view. For Multicloud Object Gateway accounts, you can view the I/O operations and logical used capacity. For providers, you can view I/O operation, physical and logical usage, and egress. The following tables explain the different metrics that you can view based on your selection from the drop-down menus on the top of the card: Table 3.1. Indicators for Multicloud Object Gateway Consumer types Metrics Chart display Accounts I/O operations Displays read and write I/O operations for the top five consumers. The total reads and writes of all the consumers is displayed at the bottom. This information helps you monitor the throughput demand (IOPS) per application or account. Accounts Logical Used Capacity Displays total logical usage of each account for the top five consumers. This helps you monitor the throughput demand per application or account. Providers I/O operations Displays the count of I/O operations generated by the MCG when accessing the storage backend hosted by the provider. This helps you understand the traffic in the cloud so that you can improve resource allocation according to the I/O pattern, thereby optimizing the cost. Providers Physical vs Logical usage Displays the data consumption in the system by comparing the physical usage with the logical usage per provider. This helps you control the storage resources and devise a placement strategy in line with your usage characteristics and your performance requirements while potentially optimizing your costs. Providers Egress The amount of data the MCG retrieves from each provider (read bandwidth originated with the applications). This helps you understand the traffic in the cloud to improve resource allocation according to the egress pattern, thereby optimizing the cost. For the RADOS Object Gateway, you can use the Metric drop-down to view the Latency or Bandwidth . Latency : Provides a visual indication of the average GET/PUT latency imbalance across RADOS Object Gateway instances. Bandwidth : Provides a visual indication of the sum of GET/PUT bandwidth across RADOS Object Gateway instances. Activity card This card displays what activities are happening or have recently happened in the OpenShift Data Foundation cluster. 
The card is separated into two sections: Ongoing : Displays the progress of ongoing activities related to rebuilding of data resiliency and upgrading of OpenShift Data Foundation operator. Recent Events : Displays the list of events that happened in the openshift-storage namespace. 3.3. Pool metrics The Pool metrics dashboard provides information to ensure efficient data consumption, and how to enable or disable compression if less effective. Viewing pool metrics To view the pool list: Click Storage Data Foundation . In the Storage systems tab, select the storage system and then click BlockPools . When you click on a pool name, the following cards on each Pool dashboard is displayed along with the metrics based on deployment mode (internal or external): Details card The Details card shows the following: Pool Name Volume type Replicas Status card This card shows whether the pool is up and running without any errors or is experiencing some issues. Mirroring card When the mirroring option is enabled, this card shows the mirroring status, image health, and last checked time-stamp. The mirroring metrics are displayed when cluster level mirroring is enabled. The metrics help to prevent disaster recovery failures and notify of any discrepancies so that the data is kept intact. The mirroring card shows high-level information such as: Mirroring state as either enabled or disabled for the particular pool. Status of all images under the pool as replicating successfully or not. Percentage of images that are replicating and not replicating. Inventory card The Inventory card shows the number of storage classes and Persistent Volume Claims. Compression card This card shows the compression status as enabled or disabled. It also displays the storage efficiency details as follows: Compression eligibility that indicates what portion of written compression-eligible data is compressible (per ceph parameters) Compression ratio of compression-eligible data Compression savings provides the total savings (including replicas) of compression-eligible data For information on how to enable or disable compression for an existing pool, see Updating an existing pool . Raw Capacity card This card shows the total raw storage capacity which includes replication on the cluster. Used legend indicates storage capacity used by the pool Available legend indicates the available raw storage capacity on the cluster Performance card In this card, you can view the usage of I/O operations and throughput demand per application or account. The graph indicates the average latency or bandwidth across the instances. 3.4. Network File System metrics The Network File System (NFS) metrics dashboard provides enhanced observability for NFS mounts such as the following: Mount point for any exported NFS shares Number of client mounts A breakdown statistics of the clients that are connected to help determine internal versus the external client mounts Grace period status of the Ganesha server Health statuses of the Ganesha server Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. Ensure that NFS is enabled. Procedure You can navigate to the Network file system dashboard in the OpenShift Web Console as follows: Click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. Click the Network file system tab. This tab is available only when NFS is enabled. 
Note When you enable or disable NFS from the command-line interface, you must perform a hard refresh to display or hide the Network file system tab in the dashboard. The following NFS metrics are displayed: Status Card This card shows the status of the server based on the total number of active worker threads. A non-zero thread count indicates a healthy status. Throughput Card This card shows the throughput of the server, which is the sum of the total request bytes and total response bytes for both read and write operations of the server. Top client Card This card shows the throughput of clients, which is the sum of the total response bytes sent by a client and the total request bytes sent by a client for both read and write operations. It shows the top three such clients. 3.5. Enabling metadata on RBD and CephFS volumes You can set the persistent volume claim (PVC), persistent volume (PV), and Namespace names in the RADOS block device (RBD) and CephFS volumes for monitoring purposes. This enables you to read the RBD and CephFS metadata to identify the mapping between the OpenShift Container Platform and RBD and CephFS volumes. To enable the RADOS block device (RBD) and CephFS volume metadata feature, you need to set the CSI_ENABLE_METADATA variable in the rook-ceph-operator-config configmap . By default, this feature is disabled. If you enable the feature after upgrading from a previous version, the existing PVCs will not contain the metadata. Also, when you enable the metadata feature, the PVCs that were created before enabling it will not have the metadata. Prerequisites Ensure that the ocs_operator is installed and that a storagecluster is created for the operator. Ensure that the storagecluster is in the Ready state. Procedure Edit the rook-ceph operator ConfigMap to set CSI_ENABLE_METADATA to true . Wait for the respective CSI CephFS plugin provisioner pods and CSI RBD plugin pods to reach the Running state. Note Ensure that the setmetadata variable is automatically set after the metadata feature is enabled. This variable should not be available when the metadata feature is disabled. Verification steps To verify the metadata for RBD PVC: Create a PVC. Check the status of the PVC. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. There are four metadata on this image: To verify the metadata for RBD clones: Create a clone. Check the status of the clone. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for RBD Snapshots: Create a snapshot. Check the status of the snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for RBD Restore: Restore a volume snapshot. Check the status of the restored volume snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article.
To verify the metadata for CephFS PVC: Create a PVC. Check the status of the PVC. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for CephFS clone: Create a clone. Check the status of the clone. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata for CephFS volume snapshot: Create a volume snapshot. Check the status of the volume snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. To verify the metadata of the CephFS Restore: Restore a volume snapshot. Check the status of the restored volume snapshot. Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI). For information about how to access the Red Hat Ceph Storage CLI, see the How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment article. | [
"oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 57m Ready 2022-08-30T06:52:58Z 4.12.0",
"oc patch cm rook-ceph-operator-config -n openshift-storage -p USD'data:\\n \"CSI_ENABLE_METADATA\": \"true\"' configmap/rook-ceph-operator-config patched",
"oc get pods | grep csi csi-cephfsplugin-b8d6c 2/2 Running 0 56m csi-cephfsplugin-bnbg9 2/2 Running 0 56m csi-cephfsplugin-kqdw4 2/2 Running 0 56m csi-cephfsplugin-provisioner-7dcd78bb9b-q6dxb 5/5 Running 0 56m csi-cephfsplugin-provisioner-7dcd78bb9b-zc4q5 5/5 Running 0 56m csi-rbdplugin-776dl 3/3 Running 0 56m csi-rbdplugin-ffl52 3/3 Running 0 56m csi-rbdplugin-jx9mz 3/3 Running 0 56m csi-rbdplugin-provisioner-5f6d766b6c-694fx 6/6 Running 0 56m csi-rbdplugin-provisioner-5f6d766b6c-vzv45 6/6 Running 0 56m",
"cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rbd-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-rbd EOF",
"oc get pvc | grep rbd-pvc rbd-pvc Bound pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 1Gi RWO ocs-storagecluster-ceph-rbd 32s",
"[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012",
"Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/pv/name pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 csi.storage.k8s.io/pvc/name rbd-pvc csi.storage.k8s.io/pvc/namespace openshift-storage",
"cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rbd-pvc-clone spec: storageClassName: ocs-storagecluster-ceph-rbd dataSource: name: rbd-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteOnce resources: requests: storage: 1Gi EOF",
"oc get pvc | grep rbd-pvc rbd-pvc Bound pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 1Gi RWO ocs-storagecluster-ceph-rbd 15m rbd-pvc-clone Bound pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0 1Gi RWO ocs-storagecluster-ceph-rbd 52s",
"[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-vol-063b982d-2845-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-063b982d-2845-11ed-94bd-0a580a830012 There are 4 metadata on this image: Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/pv/name pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0 csi.storage.k8s.io/pvc/name rbd-pvc-clone csi.storage.k8s.io/pvc/namespace openshift-storage",
"cat <<EOF | oc create -f - apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: rbd-pvc-snapshot spec: volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass source: persistentVolumeClaimName: rbd-pvc EOF volumesnapshot.snapshot.storage.k8s.io/rbd-pvc-snapshot created",
"oc get volumesnapshot NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE rbd-pvc-snapshot true rbd-pvc 1Gi ocs-storagecluster-rbdplugin-snapclass snapcontent-b992b782-7174-4101-8fe3-e6e478eb2c8f 17s 18s",
"[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-snap-a1e24408-2848-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-snap-a1e24408-2848-11ed-94bd-0a580a830012 There are 4 metadata on this image: Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/volumesnapshot/name rbd-pvc-snapshot csi.storage.k8s.io/volumesnapshot/namespace openshift-storage csi.storage.k8s.io/volumesnapshotcontent/name snapcontent-b992b782-7174-4101-8fe3-e6e478eb2c8f",
"cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rbd-pvc-restore spec: storageClassName: ocs-storagecluster-ceph-rbd dataSource: name: rbd-pvc-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: - ReadWriteOnce resources: requests: storage: 1Gi EOF persistentvolumeclaim/rbd-pvc-restore created",
"oc get pvc | grep rbd db-noobaa-db-pg-0 Bound pvc-615e2027-78cd-4ea2-a341-fdedd50c5208 50Gi RWO ocs-storagecluster-ceph-rbd 51m rbd-pvc Bound pvc-30628fa8-2966-499c-832d-a6a3a8ebc594 1Gi RWO ocs-storagecluster-ceph-rbd 47m rbd-pvc-clone Bound pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0 1Gi RWO ocs-storagecluster-ceph-rbd 32m rbd-pvc-restore Bound pvc-f900e19b-3924-485c-bb47-01b84c559034 1Gi RWO ocs-storagecluster-ceph-rbd 111s",
"[sh-4.x]USD rbd ls ocs-storagecluster-cephblockpool csi-snap-a1e24408-2848-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012 csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp csi-vol-5f6e0737-2849-11ed-94bd-0a580a830012 csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012 csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012 [sh-4.x]USD rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-5f6e0737-2849-11ed-94bd-0a580a830012 There are 4 metadata on this image: Key Value csi.ceph.com/cluster/name 6cd7a18d-7363-4830-ad5c-f7b96927f026 csi.storage.k8s.io/pv/name pvc-f900e19b-3924-485c-bb47-01b84c559034 csi.storage.k8s.io/pvc/name rbd-pvc-restore csi.storage.k8s.io/pvc/namespace openshift-storage",
"cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-cephfs EOF",
"get pvc | grep cephfs cephfs-pvc Bound pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9 1Gi RWO ocs-storagecluster-cephfs 11s",
"ceph fs volume ls [ { \"name\": \"ocs-storagecluster-cephfilesystem\" } ] ceph fs subvolumegroup ls ocs-storagecluster-cephfilesystem [ { \"name\": \"csi\" } ] ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi [ { \"name\": \"csi-vol-25266061-284c-11ed-95e0-0a580a810215\" } ] ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/pv/name\": \"pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9\", \"csi.storage.k8s.io/pvc/name\": \"cephfs-pvc\", \"csi.storage.k8s.io/pvc/namespace\": \"openshift-storage\" }",
"cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-clone spec: storageClassName: ocs-storagecluster-cephfs dataSource: name: cephfs-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteMany resources: requests: storage: 1Gi EOF persistentvolumeclaim/cephfs-pvc-clone created",
"oc get pvc | grep cephfs cephfs-pvc Bound pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9 1Gi RWO ocs-storagecluster-cephfs 9m5s cephfs-pvc-clone Bound pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce 1Gi RWX ocs-storagecluster-cephfs 20s",
"[rook@rook-ceph-tools-c99fd8dfc-6sdbg /]USD ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi [ { \"name\": \"csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215\" }, { \"name\": \"csi-vol-25266061-284c-11ed-95e0-0a580a810215\" } ] [rook@rook-ceph-tools-c99fd8dfc-6sdbg /]USD ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/pv/name\": \"pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce\", \"csi.storage.k8s.io/pvc/name\": \"cephfs-pvc-clone\", \"csi.storage.k8s.io/pvc/namespace\": \"openshift-storage\" }",
"cat <<EOF | oc create -f - apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: cephfs-pvc-snapshot spec: volumeSnapshotClassName: ocs-storagecluster-cephfsplugin-snapclass source: persistentVolumeClaimName: cephfs-pvc EOF volumesnapshot.snapshot.storage.k8s.io/cephfs-pvc-snapshot created",
"oc get volumesnapshot NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE cephfs-pvc-snapshot true cephfs-pvc 1Gi ocs-storagecluster-cephfsplugin-snapclass snapcontent-f0f17463-d13b-4e13-b44e-6340bbb3bee0 9s 9s",
"ceph fs subvolume snapshot ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 --group_name csi [ { \"name\": \"csi-snap-06336f4e-284e-11ed-95e0-0a580a810215\" } ] ceph fs subvolume snapshot metadata ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 csi-snap-06336f4e-284e-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/volumesnapshot/name\": \"cephfs-pvc-snapshot\", \"csi.storage.k8s.io/volumesnapshot/namespace\": \"openshift-storage\", \"csi.storage.k8s.io/volumesnapshotcontent/name\": \"snapcontent-f0f17463-d13b-4e13-b44e-6340bbb3bee0\" }",
"cat <<EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-restore spec: storageClassName: ocs-storagecluster-cephfs dataSource: name: cephfs-pvc-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: - ReadWriteMany resources: requests: storage: 1Gi EOF persistentvolumeclaim/cephfs-pvc-restore created",
"oc get pvc | grep cephfs cephfs-pvc Bound pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9 1Gi RWO ocs-storagecluster-cephfs 29m cephfs-pvc-clone Bound pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce 1Gi RWX ocs-storagecluster-cephfs 20m cephfs-pvc-restore Bound pvc-43d55ea1-95c0-42c8-8616-4ee70b504445 1Gi RWX ocs-storagecluster-cephfs 21s",
"ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi [ { \"name\": \"csi-vol-3536db13-2850-11ed-95e0-0a580a810215\" }, { \"name\": \"csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215\" }, { \"name\": \"csi-vol-25266061-284c-11ed-95e0-0a580a810215\" } ] ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-3536db13-2850-11ed-95e0-0a580a810215 --group_name=csi --format=json { \"csi.ceph.com/cluster/name\": \"6cd7a18d-7363-4830-ad5c-f7b96927f026\", \"csi.storage.k8s.io/pv/name\": \"pvc-43d55ea1-95c0-42c8-8616-4ee70b504445\", \"csi.storage.k8s.io/pvc/name\": \"cephfs-pvc-restore\", \"csi.storage.k8s.io/pvc/namespace\": \"openshift-storage\" }"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/monitoring_openshift_data_foundation/metrics |
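In addition to the verification steps above, a quick, hedged check that the metadata feature took effect is to read the value back from the rook-ceph-operator-config ConfigMap and to confirm that the CSI provisioner picked it up. The deployment name csi-rbdplugin-provisioner and the --setmetadata argument reflect a typical Rook-based deployment and are assumptions that can differ between releases:

oc get cm rook-ceph-operator-config -n openshift-storage -o jsonpath='{.data.CSI_ENABLE_METADATA}{"\n"}'

oc get deployment csi-rbdplugin-provisioner -n openshift-storage -o yaml | grep setmetadata

The first command is expected to print true when the feature is enabled, and the second is expected to show a --setmetadata argument on the provisioner container.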
Chapter 4. Testing and troubleshooting autoscaling | Chapter 4. Testing and troubleshooting autoscaling Use the Orchestration service (heat) to automatically scale instances up and down based on threshold definitions. To troubleshoot your environment, you can look for errors in the log files and history records. 4.1. Testing automatic scaling up of instances You can use the Orchestration service (heat) to scale instances automatically based on the cpu_alarm_high threshold definition. When the CPU use reaches a value defined in the threshold parameter, another instance starts up to balance the load. The threshold value in the template.yaml file is set to 80%. Procedure Log in to the host environment as the stack user. For standalone environments set the OS_CLOUD environment variable: [stack@standalone ~]USD export OS_CLOUD=standalone For director environments source the stackrc file: [stack@undercloud ~]USD source ~/stackrc Log in to the instance: USD ssh -i ~/mykey.pem [email protected] Run multiple dd commands to generate the load: [instance ~]USD sudo dd if=/dev/zero of=/dev/null & [instance ~]USD sudo dd if=/dev/zero of=/dev/null & [instance ~]USD sudo dd if=/dev/zero of=/dev/null & Exit from the running instance and return to the host. After you run the dd commands, you can expect to have 100% CPU use in the instance. Verify that the alarm has been triggered: USD openstack alarm list +--------------------------------------+--------------------------------------------+-------------------------------------+-------+----------+---------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+--------------------------------------------+-------------------------------------+-------+----------+---------+ | 022f707d-46cc-4d39-a0b2-afd2fc7ab86a | gnocchi_aggregation_by_resources_threshold | example-cpu_alarm_high-odj77qpbld7j | alarm | low | True | | 46ed2c50-e05a-44d8-b6f6-f1ebd83af913 | gnocchi_aggregation_by_resources_threshold | example-cpu_alarm_low-m37jvnm56x2t | ok | low | True | +--------------------------------------+--------------------------------------------+-------------------------------------+-------+----------+---------+ After approximately 60 seconds, Orchestration starts another instance and adds it to the group. To verify that an instance has been created, enter the following command: USD openstack server list +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ | 477ee1af-096c-477c-9a3f-b95b0e2d4ab5 | ex-3gax-4urpikl5koff-yrxk3zxzfmpf-server-2hde4tp4trnk | ACTIVE | - | Running | internal1=10.10.10.13, 192.168.122.17 | | e1524f65-5be6-49e4-8501-e5e5d812c612 | ex-3gax-5f3a4og5cwn2-png47w3u2vjd-server-vaajhuv4mj3j | ACTIVE | - | Running | internal1=10.10.10.9, 192.168.122.8 | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ After another short period of time, observe that the Orchestration service has autoscaled to three instances. The configuration is set to a maximum of three instances. 
Verify there are three instances: USD openstack server list +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ | 477ee1af-096c-477c-9a3f-b95b0e2d4ab5 | ex-3gax-4urpikl5koff-yrxk3zxzfmpf-server-2hde4tp4trnk | ACTIVE | - | Running | internal1=10.10.10.13, 192.168.122.17 | | e1524f65-5be6-49e4-8501-e5e5d812c612 | ex-3gax-5f3a4og5cwn2-png47w3u2vjd-server-vaajhuv4mj3j | ACTIVE | - | Running | internal1=10.10.10.9, 192.168.122.8 | | 6c88179e-c368-453d-a01a-555eae8cd77a | ex-3gax-fvxz3tr63j4o-36fhftuja3bw-server-rhl4sqkjuy5p | ACTIVE | - | Running | internal1=10.10.10.5, 192.168.122.5 | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ 4.2. Testing automatic scaling down of instances You can use the Orchestration service (heat) to automatically scale down instances based on the cpu_alarm_low threshold. In this example, the instances are scaled down when CPU use is below 5%. Procedure From within the workload instance, terminate the running dd processes and observe Orchestration begin to scale the instances back down. Log in to the host environment as the stack user. For standalone environments set the OS_CLOUD environment variable: [stack@standalone ~]USD export OS_CLOUD=standalone For director environments source the stackrc file: [stack@undercloud ~]USD source ~/stackrc When you stop the dd processes, this triggers the cpu_alarm_low event alarm. As a result, Orchestration begins to automatically scale down and remove the instances. Verify that the corresponding alarm has triggered: After a few minutes, Orchestration continually reduce the number of instances to the minimum value defined in the min_size parameter of the scaleup_group definition. In this scenario, the min_size parameter is set to 1 . 4.3. Troubleshooting for autoscaling If your environment is not working properly, you can look for errors in the log files and history records. Procedure Log in to the host environment as the stack user. 
For standalone environments set the OS_CLOUD environment variable: [stack@standalone ~]USD export OS_CLOUD=standalone For director environments source the stackrc file: [stack@undercloud ~]USD source ~/stackrc To retrieve information on state transitions, list the stack event records: USD openstack stack event list example 2017-03-06 11:12:43Z [example]: CREATE_IN_PROGRESS Stack CREATE started 2017-03-06 11:12:43Z [example.scaleup_group]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:04Z [example.scaleup_group]: CREATE_COMPLETE state changed 2017-03-06 11:13:04Z [example.scaledown_policy]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:05Z [example.scaleup_policy]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:05Z [example.scaledown_policy]: CREATE_COMPLETE state changed 2017-03-06 11:13:05Z [example.scaleup_policy]: CREATE_COMPLETE state changed 2017-03-06 11:13:05Z [example.cpu_alarm_low]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:05Z [example.cpu_alarm_high]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:06Z [example.cpu_alarm_low]: CREATE_COMPLETE state changed 2017-03-06 11:13:07Z [example.cpu_alarm_high]: CREATE_COMPLETE state changed 2017-03-06 11:13:07Z [example]: CREATE_COMPLETE Stack CREATE completed successfully 2017-03-06 11:19:34Z [example.scaleup_policy]: SIGNAL_COMPLETE alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.4080102993) 2017-03-06 11:25:43Z [example.scaleup_policy]: SIGNAL_COMPLETE alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.8869217299) 2017-03-06 11:33:25Z [example.scaledown_policy]: SIGNAL_COMPLETE alarm state changed from ok to alarm (Transition to alarm due to 1 samples outside threshold, most recent: 2.73931707966) 2017-03-06 11:39:15Z [example.scaledown_policy]: SIGNAL_COMPLETE alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 2.78110858552) Read the alarm history log: USD openstack alarm-history show 022f707d-46cc-4d39-a0b2-afd2fc7ab86a +----------------------------+------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------+ | timestamp | type | detail | event_id | +----------------------------+------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------+ | 2017-03-06T11:32:35.510000 | state transition | {"transition_reason": "Transition to ok due to 1 samples inside threshold, most recent: | 25e0e70b-3eda-466e-abac-42d9cf67e704 | | | | 2.73931707966", "state": "ok"} | | | 2017-03-06T11:17:35.403000 | state transition | {"transition_reason": "Transition to alarm due to 1 samples outside threshold, most recent: | 8322f62c-0d0a-4dc0-9279-435510f81039 | | | | 95.0964497325", "state": "alarm"} | | | 2017-03-06T11:15:35.723000 | state transition | {"transition_reason": "Transition to ok due to 1 samples inside threshold, most recent: | 1503bd81-7eba-474e-b74e-ded8a7b630a1 | | | | 3.59330523447", "state": "ok"} | | | 2017-03-06T11:13:06.413000 | creation | {"alarm_actions": ["trust+http://fca6e27e3d524ed68abdc0fd576aa848:[email protected]:8004/v1/fd | 224f15c0-b6f1-4690-9a22-0c1d236e65f6 | | | | 1c345135be4ee587fef424c241719d/stacks/example/d9ef59ed-b8f8-4e90-bd9b- | | | | | ae87e73ef6e2/resources/scaleup_policy/signal"], "user_id": 
"a85f83b7f7784025b6acdc06ef0a8fd8", | | | | | "name": "example-cpu_alarm_high-odj77qpbld7j", "state": "insufficient data", "timestamp": | | | | | "2017-03-06T11:13:06.413455", "description": "Scale up if CPU > 80%", "enabled": true, | | | | | "state_timestamp": "2017-03-06T11:13:06.413455", "rule": {"evaluation_periods": 1, "metric": | | | | | "cpu_util", "aggregation_method": "mean", "granularity": 300, "threshold": 80.0, "query": "{\"=\": | | | | | {\"server_group\": \"d9ef59ed-b8f8-4e90-bd9b-ae87e73ef6e2\"}}", "comparison_operator": "gt", | | | | | "resource_type": "instance"}, "alarm_id": "022f707d-46cc-4d39-a0b2-afd2fc7ab86a", | | | | | "time_constraints": [], "insufficient_data_actions": null, "repeat_actions": true, "ok_actions": | | | | | null, "project_id": "fd1c345135be4ee587fef424c241719d", "type": | | | | | "gnocchi_aggregation_by_resources_threshold", "severity": "low"} | | +----------------------------+------------------+-----------------------------------------------------------------------------------------------------+------------------------------------- To view the records of scale-out or scale-down operations that heat collects for the existing stack, you can use the awk command to parse the heat-engine.log : USD awk '/Stack UPDATE started/,/Stack CREATE completed successfully/ {print USD0}' /var/log/containers/heat/heat-engine.log To view aodh-related information, examine the evaluator.log : USD grep -i alarm /var/log/containers/aodh/evaluator.log | grep -i transition 4.4. Using CPU telemetry values for autoscaling threshold when using rate:mean aggregration When using the OS::Heat::Autoscaling heat orchestration template (HOT) and setting a threshold value for CPU, the value is expressed in nanoseconds of CPU time which is a dynamic value based on the number of virtual CPUs allocated to the instance workload. In this reference guide we'll explore how to calculate and express the CPU nanosecond value as a percentage when using the Gnocchi rate:mean aggregration method. 4.4.1. Calculating CPU telemetry values as a percentage CPU telemetry is stored in Gnocchi (OpenStack time-series data store) as CPU utilization in nanoseconds. When using CPU telemetry to define autoscaling thresholds it is useful to express the values as a percentage of CPU utilization since that is more natural when defining the threshold values. When defining the scaling policies used as part of an autoscaling group, we can take our desired threshold defined as a percentage and calculate the required threshold value in nanoseconds which is used in the policy definitions. Value (ns) Granularity (s) Percentage 60000000000 60 100 54000000000 60 90 48000000000 60 80 42000000000 60 70 36000000000 60 60 30000000000 60 50 24000000000 60 40 18000000000 60 30 12000000000 60 20 6000000000 60 10 4.4.2. Displaying instance workload vCPU as a percentage You can display the gnocchi-stored CPU telemetry data as a percentage rather than the nanosecond values for instances by using the openstack metric aggregates command. Prerequisites Create a heat stack using the autoscaling group resource that results in an instance workload. Procedure Login to your OpenStack environment as the cloud adminstrator. 
Retrieve the ID of the autoscaling group heat stack: USD openstack stack show vnf -c id -c stack_status +--------------+--------------------------------------+ | Field | Value | +--------------+--------------------------------------+ | id | e0a15cee-34d1-418a-ac79-74ad07585730 | | stack_status | CREATE_COMPLETE | +--------------+--------------------------------------+ Set the value of the stack ID to an environment variable: USD export STACK_ID=USD(openstack stack show vnf -c id -f value) Return the metrics as an aggregate by resource type instance (server ID) with the value calculated as a percentage. The aggregate is returned as a value of nanoseconds of CPU time. We divide that number by 1000000000 to get the value in seconds. We then divide the value by our granularity, which in this example is 60 seconds. That value is then converted to a percentage by multiplying by 100. Finally, we divide the total value by the number of vCPU provided by the flavor assigned to the instance, in this example a value of 2 vCPU, providing us a value expressed as a percentage of CPU time: USD openstack metric aggregates --resource-type instance --sort-column timestamp --sort-descending '(/ (* (/ (/ (metric cpu rate:mean) 1000000000) 60) 100) 2)' server_group="USDSTACK_ID" +----------------------------------------------------+---------------------------+-------------+--------------------+ | name | timestamp | granularity | value | +----------------------------------------------------+---------------------------+-------------+--------------------+ | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T21:03:00+00:00 | 60.0 | 3.158333333333333 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T21:02:00+00:00 | 60.0 | 2.6333333333333333 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T21:02:00+00:00 | 60.0 | 2.533333333333333 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T21:01:00+00:00 | 60.0 | 2.833333333333333 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T21:01:00+00:00 | 60.0 | 3.0833333333333335 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T21:00:00+00:00 | 60.0 | 13.450000000000001 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T21:00:00+00:00 | 60.0 | 2.45 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T21:00:00+00:00 | 60.0 | 2.6166666666666667 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T20:59:00+00:00 | 60.0 | 60.583333333333336 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:59:00+00:00 | 60.0 | 2.35 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:59:00+00:00 | 60.0 | 2.525 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T20:58:00+00:00 | 60.0 | 71.35833333333333 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:58:00+00:00 | 60.0 | 3.025 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:58:00+00:00 | 60.0 | 9.3 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T20:57:00+00:00 | 60.0 | 66.19166666666668 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:57:00+00:00 | 60.0 | 2.275 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:57:00+00:00 | 60.0 | 56.31666666666667 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T20:56:00+00:00 | 60.0 | 59.50833333333333 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:56:00+00:00 | 60.0 
| 2.375 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:56:00+00:00 | 60.0 | 63.949999999999996 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:55:00+00:00 | 60.0 | 15.558333333333335 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:55:00+00:00 | 60.0 | 93.85 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:54:00+00:00 | 60.0 | 59.54999999999999 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:54:00+00:00 | 60.0 | 61.23333333333334 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:53:00+00:00 | 60.0 | 74.73333333333333 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:52:00+00:00 | 60.0 | 57.86666666666667 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:51:00+00:00 | 60.0 | 60.416666666666664 | +----------------------------------------------------+---------------------------+-------------+--------------------+ 4.4.3. Retrieving available telemetry for an instance workload Retrieve the available telemetry for an instance workload and express the vCPU utilization as a percentage. Prerequisites Create a heat stack using the autoscaling group resource that results in an instance workload. Procedure Login to your OpenStack environment as the cloud adminstrator. Retrieve the ID of the autoscaling group heat stack: USD openstack stack show vnf -c id -c stack_status +--------------+--------------------------------------+ | Field | Value | +--------------+--------------------------------------+ | id | e0a15cee-34d1-418a-ac79-74ad07585730 | | stack_status | CREATE_COMPLETE | +--------------+--------------------------------------+ Set the value of the stack ID to an environment variable: USD export STACK_ID=USD(openstack stack show vnf -c id -f value) Retrieve the ID of the workload instance you want to return data for. We are using the server list long form and filtering for instances that are part of our autoscaling group: USD openstack server list --long --fit-width | grep "metering.server_group='USDSTACK_ID'" | bc1811de-48ed-44c1-ae22-c01f36d6cb02 | vn-xlfb4jb-yhbq6fkk2kec-qsu2lr47zigs-vnf-y27wuo25ce4e | ACTIVE | None | Running | private=192.168.100.139, 192.168.25.179 | fedora36 | d21f1aaa-0077-4313-8a46-266c39b705c1 | m1.small | 692533fe-0912-417e-b706-5d085449db53 | nova | standalone.localdomain | metering.server_group='e0a15cee-34d1-418a-ac79-74ad07585730' | Set the instance ID for one of the returned instance workload names: USD INSTANCE_NAME='vn-xlfb4jb-yhbq6fkk2kec-qsu2lr47zigs-vnf-y27wuo25ce4e' ; export INSTANCE_ID=USD(openstack server list --name USDINSTANCE_NAME -c ID -f value) Verify metrics have been stored for the instance resource ID. If no metrics are available it's possible not enough time has elapsed since the instance was created. 
If enough time has elapsed, you can check the logs for the data collection service in /var/log/containers/ceilometer/ and logs for the time-series database service gnocchi in /var/log/containers/gnocchi/ : USD openstack metric resource show --column metrics USDINSTANCE_ID +---------+---------------------------------------------------------------------+ | Field | Value | +---------+---------------------------------------------------------------------+ | metrics | compute.instance.booting.time: 57ca241d-764b-4c58-aa32-35760d720b08 | | | cpu: d7767d7f-b10c-4124-8893-679b2e5d2ccd | | | disk.ephemeral.size: 038b11db-0598-4cfd-9f8d-4ba6b725375b | | | disk.root.size: 843f8998-e644-41f6-8635-e7c99e28859e | | | memory.usage: 1e554370-05ac-4107-98d8-9330265db750 | | | memory: fbd50c0e-90fa-4ad9-b0df-f7361ceb4e38 | | | vcpus: 0629743e-6baa-4e22-ae93-512dc16bac85 | +---------+---------------------------------------------------------------------+ Verify there are available measures for the resource metric and note the granularity value as we'll use it when running the openstack metric aggregates command: USD openstack metric measures show --resource-id USDINSTANCE_ID --aggregation rate:mean cpu +---------------------------+-------------+---------------+ | timestamp | granularity | value | +---------------------------+-------------+---------------+ | 2022-11-08T14:12:00+00:00 | 60.0 | 71920000000.0 | | 2022-11-08T14:13:00+00:00 | 60.0 | 88920000000.0 | | 2022-11-08T14:14:00+00:00 | 60.0 | 76130000000.0 | | 2022-11-08T14:15:00+00:00 | 60.0 | 17640000000.0 | | 2022-11-08T14:16:00+00:00 | 60.0 | 3330000000.0 | | 2022-11-08T14:17:00+00:00 | 60.0 | 2450000000.0 | ... Retrieve the number of vCPU cores applied to the workload instance by reviewing the configured flavor for the instance workload: USD openstack server show USDINSTANCE_ID -cflavor -f value m1.small (692533fe-0912-417e-b706-5d085449db53) USD openstack flavor show 692533fe-0912-417e-b706-5d085449db53 -c vcpus -f value 2 Return the metrics as an aggregate by resource type instance (server ID) with the value calculated as a percentage. The aggregate is returned as a value of nanoseconds of CPU time. We divide that number by 1000000000 to get the value in seconds. We then divide the value by our granularity, which in this example is 60 seconds (as previously retrieved with openstack metric measures show command). That value is then converted to a percentage by multiplying by 100. 
Finally, we divide the total value by the number of vCPU provided by the flavor assigned to the instance, in this example a value of 2 vCPU, providing us a value expressed as a percentage of CPU time: USD openstack metric aggregates --resource-type instance --sort-column timestamp --sort-descending '(/ (* (/ (/ (metric cpu rate:mean) 1000000000) 60) 100) 2)' id=USDINSTANCE_ID +----------------------------------------------------+---------------------------+-------------+--------------------+ | name | timestamp | granularity | value | +----------------------------------------------------+---------------------------+-------------+--------------------+ | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:26:00+00:00 | 60.0 | 2.45 | | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:25:00+00:00 | 60.0 | 11.075 | | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:24:00+00:00 | 60.0 | 61.3 | | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:23:00+00:00 | 60.0 | 74.78333333333332 | | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:22:00+00:00 | 60.0 | 55.383333333333326 | ... | [
"[stack@standalone ~]USD export OS_CLOUD=standalone",
"[stack@undercloud ~]USD source ~/stackrc",
"ssh -i ~/mykey.pem [email protected]",
"[instance ~]USD sudo dd if=/dev/zero of=/dev/null & [instance ~]USD sudo dd if=/dev/zero of=/dev/null & [instance ~]USD sudo dd if=/dev/zero of=/dev/null &",
"openstack alarm list +--------------------------------------+--------------------------------------------+-------------------------------------+-------+----------+---------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+--------------------------------------------+-------------------------------------+-------+----------+---------+ | 022f707d-46cc-4d39-a0b2-afd2fc7ab86a | gnocchi_aggregation_by_resources_threshold | example-cpu_alarm_high-odj77qpbld7j | alarm | low | True | | 46ed2c50-e05a-44d8-b6f6-f1ebd83af913 | gnocchi_aggregation_by_resources_threshold | example-cpu_alarm_low-m37jvnm56x2t | ok | low | True | +--------------------------------------+--------------------------------------------+-------------------------------------+-------+----------+---------+",
"openstack server list +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ | 477ee1af-096c-477c-9a3f-b95b0e2d4ab5 | ex-3gax-4urpikl5koff-yrxk3zxzfmpf-server-2hde4tp4trnk | ACTIVE | - | Running | internal1=10.10.10.13, 192.168.122.17 | | e1524f65-5be6-49e4-8501-e5e5d812c612 | ex-3gax-5f3a4og5cwn2-png47w3u2vjd-server-vaajhuv4mj3j | ACTIVE | - | Running | internal1=10.10.10.9, 192.168.122.8 | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+",
"openstack server list +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+ | 477ee1af-096c-477c-9a3f-b95b0e2d4ab5 | ex-3gax-4urpikl5koff-yrxk3zxzfmpf-server-2hde4tp4trnk | ACTIVE | - | Running | internal1=10.10.10.13, 192.168.122.17 | | e1524f65-5be6-49e4-8501-e5e5d812c612 | ex-3gax-5f3a4og5cwn2-png47w3u2vjd-server-vaajhuv4mj3j | ACTIVE | - | Running | internal1=10.10.10.9, 192.168.122.8 | | 6c88179e-c368-453d-a01a-555eae8cd77a | ex-3gax-fvxz3tr63j4o-36fhftuja3bw-server-rhl4sqkjuy5p | ACTIVE | - | Running | internal1=10.10.10.5, 192.168.122.5 | +--------------------------------------+-------------------------------------------------------+--------+------------+-------------+---------------------------------------+",
"killall dd",
"[stack@standalone ~]USD export OS_CLOUD=standalone",
"[stack@undercloud ~]USD source ~/stackrc",
"openstack alarm list +--------------------------------------+--------------------------------------------+-------------------------------------+-------+----------+---------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+--------------------------------------------+-------------------------------------+-------+----------+---------+ | 022f707d-46cc-4d39-a0b2-afd2fc7ab86a | gnocchi_aggregation_by_resources_threshold | example-cpu_alarm_high-odj77qpbld7j | ok | low | True | | 46ed2c50-e05a-44d8-b6f6-f1ebd83af913 | gnocchi_aggregation_by_resources_threshold | example-cpu_alarm_low-m37jvnm56x2t | alarm | low | True | +--------------------------------------+--------------------------------------------+-------------------------------------+-------+----------+---------+",
"[stack@standalone ~]USD export OS_CLOUD=standalone",
"[stack@undercloud ~]USD source ~/stackrc",
"openstack stack event list example 2017-03-06 11:12:43Z [example]: CREATE_IN_PROGRESS Stack CREATE started 2017-03-06 11:12:43Z [example.scaleup_group]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:04Z [example.scaleup_group]: CREATE_COMPLETE state changed 2017-03-06 11:13:04Z [example.scaledown_policy]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:05Z [example.scaleup_policy]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:05Z [example.scaledown_policy]: CREATE_COMPLETE state changed 2017-03-06 11:13:05Z [example.scaleup_policy]: CREATE_COMPLETE state changed 2017-03-06 11:13:05Z [example.cpu_alarm_low]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:05Z [example.cpu_alarm_high]: CREATE_IN_PROGRESS state changed 2017-03-06 11:13:06Z [example.cpu_alarm_low]: CREATE_COMPLETE state changed 2017-03-06 11:13:07Z [example.cpu_alarm_high]: CREATE_COMPLETE state changed 2017-03-06 11:13:07Z [example]: CREATE_COMPLETE Stack CREATE completed successfully 2017-03-06 11:19:34Z [example.scaleup_policy]: SIGNAL_COMPLETE alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.4080102993) 2017-03-06 11:25:43Z [example.scaleup_policy]: SIGNAL_COMPLETE alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.8869217299) 2017-03-06 11:33:25Z [example.scaledown_policy]: SIGNAL_COMPLETE alarm state changed from ok to alarm (Transition to alarm due to 1 samples outside threshold, most recent: 2.73931707966) 2017-03-06 11:39:15Z [example.scaledown_policy]: SIGNAL_COMPLETE alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 2.78110858552)",
"openstack alarm-history show 022f707d-46cc-4d39-a0b2-afd2fc7ab86a +----------------------------+------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------+ | timestamp | type | detail | event_id | +----------------------------+------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------+ | 2017-03-06T11:32:35.510000 | state transition | {\"transition_reason\": \"Transition to ok due to 1 samples inside threshold, most recent: | 25e0e70b-3eda-466e-abac-42d9cf67e704 | | | | 2.73931707966\", \"state\": \"ok\"} | | | 2017-03-06T11:17:35.403000 | state transition | {\"transition_reason\": \"Transition to alarm due to 1 samples outside threshold, most recent: | 8322f62c-0d0a-4dc0-9279-435510f81039 | | | | 95.0964497325\", \"state\": \"alarm\"} | | | 2017-03-06T11:15:35.723000 | state transition | {\"transition_reason\": \"Transition to ok due to 1 samples inside threshold, most recent: | 1503bd81-7eba-474e-b74e-ded8a7b630a1 | | | | 3.59330523447\", \"state\": \"ok\"} | | | 2017-03-06T11:13:06.413000 | creation | {\"alarm_actions\": [\"trust+http://fca6e27e3d524ed68abdc0fd576aa848:[email protected]:8004/v1/fd | 224f15c0-b6f1-4690-9a22-0c1d236e65f6 | | | | 1c345135be4ee587fef424c241719d/stacks/example/d9ef59ed-b8f8-4e90-bd9b- | | | | | ae87e73ef6e2/resources/scaleup_policy/signal\"], \"user_id\": \"a85f83b7f7784025b6acdc06ef0a8fd8\", | | | | | \"name\": \"example-cpu_alarm_high-odj77qpbld7j\", \"state\": \"insufficient data\", \"timestamp\": | | | | | \"2017-03-06T11:13:06.413455\", \"description\": \"Scale up if CPU > 80%\", \"enabled\": true, | | | | | \"state_timestamp\": \"2017-03-06T11:13:06.413455\", \"rule\": {\"evaluation_periods\": 1, \"metric\": | | | | | \"cpu_util\", \"aggregation_method\": \"mean\", \"granularity\": 300, \"threshold\": 80.0, \"query\": \"{\\\"=\\\": | | | | | {\\\"server_group\\\": \\\"d9ef59ed-b8f8-4e90-bd9b-ae87e73ef6e2\\\"}}\", \"comparison_operator\": \"gt\", | | | | | \"resource_type\": \"instance\"}, \"alarm_id\": \"022f707d-46cc-4d39-a0b2-afd2fc7ab86a\", | | | | | \"time_constraints\": [], \"insufficient_data_actions\": null, \"repeat_actions\": true, \"ok_actions\": | | | | | null, \"project_id\": \"fd1c345135be4ee587fef424c241719d\", \"type\": | | | | | \"gnocchi_aggregation_by_resources_threshold\", \"severity\": \"low\"} | | +----------------------------+------------------+-----------------------------------------------------------------------------------------------------+-------------------------------------",
"awk '/Stack UPDATE started/,/Stack CREATE completed successfully/ {print USD0}' /var/log/containers/heat/heat-engine.log",
"grep -i alarm /var/log/containers/aodh/evaluator.log | grep -i transition",
"openstack stack show vnf -c id -c stack_status +--------------+--------------------------------------+ | Field | Value | +--------------+--------------------------------------+ | id | e0a15cee-34d1-418a-ac79-74ad07585730 | | stack_status | CREATE_COMPLETE | +--------------+--------------------------------------+",
"export STACK_ID=USD(openstack stack show vnf -c id -f value)",
"openstack metric aggregates --resource-type instance --sort-column timestamp --sort-descending '(/ (* (/ (/ (metric cpu rate:mean) 1000000000) 60) 100) 2)' server_group=\"USDSTACK_ID\" +----------------------------------------------------+---------------------------+-------------+--------------------+ | name | timestamp | granularity | value | +----------------------------------------------------+---------------------------+-------------+--------------------+ | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T21:03:00+00:00 | 60.0 | 3.158333333333333 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T21:02:00+00:00 | 60.0 | 2.6333333333333333 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T21:02:00+00:00 | 60.0 | 2.533333333333333 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T21:01:00+00:00 | 60.0 | 2.833333333333333 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T21:01:00+00:00 | 60.0 | 3.0833333333333335 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T21:00:00+00:00 | 60.0 | 13.450000000000001 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T21:00:00+00:00 | 60.0 | 2.45 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T21:00:00+00:00 | 60.0 | 2.6166666666666667 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T20:59:00+00:00 | 60.0 | 60.583333333333336 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:59:00+00:00 | 60.0 | 2.35 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:59:00+00:00 | 60.0 | 2.525 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T20:58:00+00:00 | 60.0 | 71.35833333333333 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:58:00+00:00 | 60.0 | 3.025 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:58:00+00:00 | 60.0 | 9.3 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T20:57:00+00:00 | 60.0 | 66.19166666666668 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:57:00+00:00 | 60.0 | 2.275 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:57:00+00:00 | 60.0 | 56.31666666666667 | | 61bfb555-9efb-46f1-8559-08dec90f94ed/cpu/rate:mean | 2022-11-07T20:56:00+00:00 | 60.0 | 59.50833333333333 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:56:00+00:00 | 60.0 | 2.375 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:56:00+00:00 | 60.0 | 63.949999999999996 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:55:00+00:00 | 60.0 | 15.558333333333335 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:55:00+00:00 | 60.0 | 93.85 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:54:00+00:00 | 60.0 | 59.54999999999999 | | 199b0cb9-6ed6-4410-9073-0fb2e7842b65/cpu/rate:mean | 2022-11-07T20:54:00+00:00 | 60.0 | 61.23333333333334 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:53:00+00:00 | 60.0 | 74.73333333333333 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:52:00+00:00 | 60.0 | 57.86666666666667 | | a95ab818-fbe8-4acd-9f7b-58e24ade6393/cpu/rate:mean | 2022-11-07T20:51:00+00:00 | 60.0 | 60.416666666666664 | +----------------------------------------------------+---------------------------+-------------+--------------------+",
"openstack stack show vnf -c id -c stack_status +--------------+--------------------------------------+ | Field | Value | +--------------+--------------------------------------+ | id | e0a15cee-34d1-418a-ac79-74ad07585730 | | stack_status | CREATE_COMPLETE | +--------------+--------------------------------------+",
"export STACK_ID=USD(openstack stack show vnf -c id -f value)",
"openstack server list --long --fit-width | grep \"metering.server_group='USDSTACK_ID'\" | bc1811de-48ed-44c1-ae22-c01f36d6cb02 | vn-xlfb4jb-yhbq6fkk2kec-qsu2lr47zigs-vnf-y27wuo25ce4e | ACTIVE | None | Running | private=192.168.100.139, 192.168.25.179 | fedora36 | d21f1aaa-0077-4313-8a46-266c39b705c1 | m1.small | 692533fe-0912-417e-b706-5d085449db53 | nova | standalone.localdomain | metering.server_group='e0a15cee-34d1-418a-ac79-74ad07585730' |",
"INSTANCE_NAME='vn-xlfb4jb-yhbq6fkk2kec-qsu2lr47zigs-vnf-y27wuo25ce4e' ; export INSTANCE_ID=USD(openstack server list --name USDINSTANCE_NAME -c ID -f value)",
"openstack metric resource show --column metrics USDINSTANCE_ID +---------+---------------------------------------------------------------------+ | Field | Value | +---------+---------------------------------------------------------------------+ | metrics | compute.instance.booting.time: 57ca241d-764b-4c58-aa32-35760d720b08 | | | cpu: d7767d7f-b10c-4124-8893-679b2e5d2ccd | | | disk.ephemeral.size: 038b11db-0598-4cfd-9f8d-4ba6b725375b | | | disk.root.size: 843f8998-e644-41f6-8635-e7c99e28859e | | | memory.usage: 1e554370-05ac-4107-98d8-9330265db750 | | | memory: fbd50c0e-90fa-4ad9-b0df-f7361ceb4e38 | | | vcpus: 0629743e-6baa-4e22-ae93-512dc16bac85 | +---------+---------------------------------------------------------------------+",
"openstack metric measures show --resource-id USDINSTANCE_ID --aggregation rate:mean cpu +---------------------------+-------------+---------------+ | timestamp | granularity | value | +---------------------------+-------------+---------------+ | 2022-11-08T14:12:00+00:00 | 60.0 | 71920000000.0 | | 2022-11-08T14:13:00+00:00 | 60.0 | 88920000000.0 | | 2022-11-08T14:14:00+00:00 | 60.0 | 76130000000.0 | | 2022-11-08T14:15:00+00:00 | 60.0 | 17640000000.0 | | 2022-11-08T14:16:00+00:00 | 60.0 | 3330000000.0 | | 2022-11-08T14:17:00+00:00 | 60.0 | 2450000000.0 |",
"openstack server show USDINSTANCE_ID -cflavor -f value m1.small (692533fe-0912-417e-b706-5d085449db53) openstack flavor show 692533fe-0912-417e-b706-5d085449db53 -c vcpus -f value 2",
"openstack metric aggregates --resource-type instance --sort-column timestamp --sort-descending '(/ (* (/ (/ (metric cpu rate:mean) 1000000000) 60) 100) 2)' id=USDINSTANCE_ID +----------------------------------------------------+---------------------------+-------------+--------------------+ | name | timestamp | granularity | value | +----------------------------------------------------+---------------------------+-------------+--------------------+ | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:26:00+00:00 | 60.0 | 2.45 | | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:25:00+00:00 | 60.0 | 11.075 | | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:24:00+00:00 | 60.0 | 61.3 | | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:23:00+00:00 | 60.0 | 74.78333333333332 | | bc1811de-48ed-44c1-ae22-c01f36d6cb02/cpu/rate:mean | 2022-11-08T14:22:00+00:00 | 60.0 | 55.383333333333326 |"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/autoscaling_for_instances/assembly_testing-and-troubleshooting-autoscaling_assembly_testing-and-troubleshooting-autoscaling |
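A practical note on the threshold values used in the alarm definitions above: following the calculation described in Section 4.4, a percentage target converts to nanoseconds as percentage / 100 x granularity x 1000000000 x vCPU count, and the table in Section 4.4.1 is consistent with a single vCPU. The shell arithmetic below is an illustrative sketch only; the 80% target, 60-second granularity, and 2 vCPUs are example values: USD echo USD(( 80 * 60 * 1000000000 * 2 / 100 )) 96000000000 In other words, an 80% CPU threshold for a 2-vCPU flavor at 60-second granularity corresponds to an alarm threshold of 96000000000 nanoseconds.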
Chapter 8. OpenShift Connector overview | Chapter 8. OpenShift Connector overview OpenShift Connector, also referred to as Visual Studio Code OpenShift Connector for Red Hat OpenShift, is a plug-in for CodeReady Workspaces that provides a method for interacting with Red Hat OpenShift 3 or 4 clusters. OpenShift Connector makes it possible to create, build, and debug applications in the CodeReady Workspaces IDE and then deploy the applications directly to a running OpenShift cluster. OpenShift Connector is a GUI for the OpenShift Do ( odo ) utility, which aggregates OpenShift CLI ( oc ) commands into compact units. As such, OpenShift Connector helps new developers who do not have an OpenShift background with creating applications and running them in the cloud. Rather than using several oc commands, the user creates a new component or service by selecting a preconfigured template, such as a Project, an Application, or a Service, and then deploys it as an OpenShift Component to their cluster. This section provides information about installing, enabling, and basic use of the OpenShift Connector plug-in. Section 8.1, "Features of OpenShift Connector" Section 8.2, "Installing OpenShift Connector in CodeReady Workspaces" Section 8.3, "Authenticating with OpenShift Connector from CodeReady Workspaces when the OpenShift OAuth service does not authenticate the CodeReady Workspaces instance" Section 8.4, "Creating Components with OpenShift Connector in CodeReady Workspaces" Section 8.5, "Connecting source code from GitHub to an OpenShift Component using OpenShift Connector" 8.1. Features of OpenShift Connector The OpenShift Connector plug-in enables the user to create, deploy, and push OpenShift Components to an OpenShift Cluster in a GUI. When used in CodeReady Workspaces, the OpenShift Connector GUI provides the following benefits to its users: Cluster management Logging in to clusters using: Authentication tokens Username and password Auto-login feature when CodeReady Workspaces is authenticated with the OpenShift OAuth service Switching contexts between different .kube/config entries directly from the extension view. Viewing and managing OpenShift resources such as build and deployment configurations from the Explorer view. Development Connecting to a local or hosted OpenShift cluster directly from CodeReady Workspaces. Quickly updating the cluster with your changes. Creating Components, Services, and Routes on the connected cluster. Adding storage directly to a component from the extension itself. Deployment Deploying to OpenShift clusters with a single click directly from CodeReady Workspaces. Navigating to the multiple Routes created to access the deployed application. Deploying multiple interlinked Components and Services directly on the cluster. Pushing and watching component changes from the CodeReady Workspaces IDE. Streaming logs directly in the integrated terminal view of CodeReady Workspaces. Monitoring Working with OpenShift resources directly from the CodeReady Workspaces IDE. Starting and resuming build and deployment configurations. Viewing and following logs for deployments, Pods, and containers. 8.2. Installing OpenShift Connector in CodeReady Workspaces OpenShift Connector is a plug-in designed to create basic OpenShift Components, using CodeReady Workspaces as the editor, and to deploy the Component to an OpenShift cluster. To visually verify that the plug-in is available in your instance, see whether the OpenShift icon is displayed in the CodeReady Workspaces left menu.
To install and enable OpenShift Connector in a CodeReady Workspaces instance, use instructions in this section. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#installing-che.adoc . Procedure Install OpenShift Connector in CodeReady Workspaces by adding it as an extension in the CodeReady Workspaces Plugins panel. Open the CodeReady Workspaces Plugins panel by pressing Ctrl + Shift + J or by navigating to View Plugins . Search for vscode-openshift-connector , and click the Install button. Restart the workspace for the changes to take effect. The dedicated OpenShift Application Explorer icon is added to the left panel. 8.3. Authenticating with OpenShift Connector from CodeReady Workspaces when the OpenShift OAuth service does not authenticate the CodeReady Workspaces instance This section describes how to authenticate with an OpenShift cluster when the OpenShift OAuth service does not authenticate the CodeReady Workspaces instance. It enables the user to develop and push Components from CodeReady Workspaces to the OpenShift instance that contains CodeReady Workspaces. Note When the OpenShift OAuth service authenticates the CodeReady Workspaces instance, the OpenShift Connector plug-in automatically establishes the authentication with the OpenShift instance containing CodeReady Workspaces. OpenShift Connector offers the following methods for logging in to the OpenShift Cluster from the CodeReady Workspaces instance: Using the notification asking to log in to the OpenShift instance containing CodeReady Workspaces. Using the Log in to the cluster button. Using the Command Palette. Note OpenShift Connector plug-in requires manual connecting to the target cluster. The OpenShift Connector plug-in logs in to the cluster as inClusterUser . If this user does not have manage project permission, this error message appears when creating a project using OpenShift Application Explorer: Failed to create Project with error 'Error: Command failed: "/tmp/vscode-unpacked/redhat.vscode-openshift -connector.latest.qvkozqtkba.openshift-connector-0.1.4-523.vsix/extension/out/tools/linux/odo" project create test-project [✗] projectrequests.project.openshift.io is forbidden To work around this issue: Log out from the local cluster. Log in to OpenShift cluster using the OpenShift user's credentials. Prerequisites A running instance of CodeReady Workspaces. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#installing-che.adoc . A CodeReady Workspaces workspace is available. See Chapter 3, Developer workspaces . The OpenShift Connector plug-in is available. See Section 8.2, "Installing OpenShift Connector in CodeReady Workspaces" . The OpenShift OAuth provider is available only for the auto-login to the OpenShift instance containing CodeReady Workspaces. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/administration_guide/index#configuring-openshift-oauth.adoc . Procedure In the left panel, click the OpenShift Application Explorer icon. In the OpenShift Connector panel, log in using the OpenShift Application Explorer. Use one of the following methods: Click the Log in to cluster button in the top left corner of the pane. Press F1 to open the Command Palette, or navigate to View > Find Command in the top menu. 
Search for OpenShift: Log in to cluster and press Enter . If a You are already logged in a cluster. message appears, click Yes . Select the method to log in to the cluster: Credentials or Token , and follow the login instructions. Note To authenticate with a token, the required token information is in the upper right corner of the main OpenShift Container Platform screen, under <User name> > Copy Login Command . 8.4. Creating Components with OpenShift Connector in CodeReady Workspaces In the context of OpenShift, Components and Services are basic structures that need to be stored in an Application, which is a part of the OpenShift project that organizes deployable assets into virtual folders for better readability. This chapter describes how to create OpenShift Components in CodeReady Workspaces using the OpenShift Connector plug-in and push them to an OpenShift cluster. Prerequisites A running instance of CodeReady Workspaces. To install an instance of CodeReady Workspaces, see https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#installing-che.adoc . The user is logged in to an OpenShift cluster using the OpenShift Connector plug-in. Procedure In the OpenShift Connector panel, right-click the row with the red OpenShift icon and select New Project . Enter a name for your project. Right-click the created project and select New Component . When prompted, enter the name for a new OpenShift Application in which the component can be stored. The following source options for your component are displayed: Git Repository This prompts you to specify a Git repository URL and select the intended revision of the runtime. Binary File This prompts you to select a file from the file explorer. Workspace Directory This prompts you to select a folder from the file explorer. Enter the name for the component. Select the component type. Select the component type version. The component is created. Right-click the component, select New URL , and enter a name of your choice. The component is ready to be pushed to the OpenShift cluster. To do so, right-click the component and select Push . The component is deployed to the cluster. Right-click the component to select additional actions, such as debugging and opening in a browser, which requires exposing port 8080 . 8.5. Connecting source code from GitHub to an OpenShift Component using OpenShift Connector When the user has source code stored in Git that they want to develop further, it is more efficient to deploy it directly from the Git repository into the OpenShift Connector Component. This chapter describes how to obtain the content from the Git repository and connect it with a CodeReady Workspaces-developed OpenShift Component. Prerequisites Have a running CodeReady Workspaces workspace. Be logged in to the OpenShift cluster using the OpenShift Connector. Procedure To make changes to your GitHub component, clone the repository into CodeReady Workspaces to obtain its source code: In the CodeReady Workspaces main screen, open the Command Palette by pressing F1 . Type the Git Clone command in the Command Palette and press Enter . Provide the GitHub URL and select the destination for the deployment. Add source-code files to your Project using the Add to workspace button. For additional information about cloning a Git repository, see: Section 2.2.2, "Accessing a Git repository using HTTPS" .
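For reference, the login that the OpenShift Application Explorer performs can also be done from a terminal inside the workspace with the OpenShift CLI. This is a sketch with placeholder values; copy the actual token and server URL from <User name> > Copy Login Command in the OpenShift web console: USD oc login --token=<token> --server=https://api.<cluster-domain>:6443 After a successful login, OpenShift Connector can work against the same cluster context, because it reads the entries in .kube/config as described in Section 8.1, "Features of OpenShift Connector" .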
| null | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/end-user_guide/openshift-connector-overview_crw |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_rhel_resource_optimization_with_insights_for_red_hat_enterprise_linux_with_fedramp/proc-providing-feedback-on-redhat-documentation |
2.2. Host Security Recommended Practices for Red Hat Enterprise Linux | 2.2. Host Security Recommended Practices for Red Hat Enterprise Linux With host security being such a critical part of a secure virtualization infrastructure, the following recommended practices should serve as a starting point for securing a Red Hat Enterprise Linux host system: Run only the services necessary to support the use and management of your guest systems. If you need to provide additional services, such as file or print services, you should consider running those services on a Red Hat Enterprise Linux guest. Limit direct access to the system to only those users who have a need to manage the system. Consider disallowing shared root access and instead use tools such as sudo to grant privileged access to administrators based on their administrative roles. Ensure that SELinux is configured properly for your installation and is operating in enforcing mode. Besides being a good security practice, the advanced virtualization security functionality provided by sVirt relies on SELinux. Refer to Chapter 4, sVirt for more information on SELinux and sVirt. Ensure that auditing is enabled on the host system and that libvirt is configured to emit audit records. When auditing is enabled, libvirt will generate audit records for changes to guest configuration as well as start and stop events, which help you track the guest's state. In addition to the standard audit log inspection tools, the libvirt audit events can also be viewed using the specialized auvirt tool. Ensure that any remote management of the system takes place only over secured network channels. Tools such as SSH and network protocols such as TLS or SSL provide both authentication and data encryption to help ensure that only approved administrators can manage the system remotely. Ensure that the firewall is configured properly for your installation and is activated at boot. Only those network ports needed for the use and management of the system should be allowed. Refrain from granting guests direct access to entire disks or block devices (for example, /dev/sdb ); instead, use partitions (for example, /dev/sdb1 ) or LVM volumes for guest storage. Ensure that staff have adequate training and knowledge in virtual environments. Warning Attaching a USB device, a Physical Function, or a physical device (when SR-IOV is not available) to a virtual machine could provide access to the device that is sufficient to overwrite that device's firmware. This presents a potential security issue by which an attacker could overwrite the device's firmware with malicious code and cause problems when moving the device between virtual machines or at host boot time. It is advised to use SR-IOV Virtual Function device assignment where applicable. Note The objective of this guide is to explain the unique security-related challenges, vulnerabilities, and solutions that are present in most virtualized environments, and the recommended method of addressing them. However, there are a number of recommended practices to follow when securing a Red Hat Enterprise Linux system that apply regardless of whether the system is a standalone, virtualization host, or guest instance. These recommended practices include procedures such as system updates, password security, encryption, and firewall configuration. This information is discussed in more detail in the Red Hat Enterprise Linux Security Guide . 2.2.1.
Special Considerations for Public Cloud Operators Public cloud service providers are exposed to a number of security risks beyond that of the traditional virtualization user. Virtual guest isolation, both between the host and guest as well as between guests, is critical due to the threat of malicious guests and the requirements on customer data confidentiality and integrity across the virtualization infrastructure. In addition to the Red Hat Enterprise Linux virtualization recommended practices previously listed, public cloud operators should also consider the following items: Disallow any direct hardware access from the guest. PCI, USB, FireWire, Thunderbolt, eSATA and other device passthrough mechanisms not only make management difficult, but often rely on the underlying hardware to enforce separation between the guests. Isolate the cloud operator's private management network from the customer guest network, and customer networks from one another, so that: the guests cannot access the host systems over the network. one customer cannot access another customer's guest systems directly via the cloud provider's internal network. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_security_guide/sect-virtualization_security_guide-host_security-host_security_recommended_practices_for_red_hat_enterprise_linux |
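Several of these recommendations can be spot-checked from a shell on the host. The following commands are an illustrative sketch; the service names are the Red Hat Enterprise Linux 6 defaults: # getenforce # service auditd status # auvirt --start today # service iptables status The first command should report Enforcing , auvirt lists the libvirt-related audit events recorded today, and the two service commands confirm that auditing and the firewall are running.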
9.10. Balloon Driver | 9.10. Balloon Driver The balloon driver allows guests to express to the hypervisor how much memory they require. The balloon driver allows the host to efficiently allocate and deallocate memory for the guest and allows free memory to be reallocated to other guests and processes. Guests using the balloon driver can mark sections of the guest's RAM as not in use (balloon inflation). The hypervisor can free that memory and use it for other host processes or other guests on that host. When the guest requires the freed memory again, the hypervisor can reallocate RAM to the guest (balloon deflation). | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/balloon_driver
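For example, on a libvirt-based host, the balloon can be inspected and adjusted from the command line. This is an illustrative sketch; the domain name guest1 and the 1 GiB target are placeholders, and virsh setmem interprets the value in KiB by default: # virsh dommemstat guest1 # virsh setmem guest1 1048576 --live The dommemstat command reports the guest's current balloon statistics, and setmem --live changes the balloon target of the running guest without a restart.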
Chapter 1. Planning Red Hat Gluster Storage Installation | Chapter 1. Planning Red Hat Gluster Storage Installation This chapter outlines the minimum hardware and software installation requirements for a successful installation, configuration, and operation of a Red Hat Gluster Storage Server environment. 1.1. About Red Hat Gluster Storage Red Hat Gluster Storage is a software-only, scale-out storage that provides flexible and affordable unstructured data storage for the enterprise. Red Hat Gluster Storage 3.5 provides new opportunities to unify data storage and infrastructure, increase performance, and improve availability and manageability in order to meet a broader set of an organization's storage challenges and requirements. GlusterFS, a key building block of Red Hat Gluster Storage, is based on a stackable user space design and can deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers over network interconnects into one large parallel network file system. The POSIX compatible GlusterFS servers, which use XFS file system format to store data on disks, can be accessed using industry standard access protocols including NFS and CIFS. Red Hat Gluster Storage can be deployed in the private cloud or datacenter using Red Hat Gluster Storage Server for On-Premise. Red Hat Gluster Storage can be installed on commodity servers or virtual machines and storage hardware resulting in a powerful, massively scalable, and highly available NAS environment. Additionally, Red Hat Gluster Storage can be deployed in the public cloud using Red Hat Gluster Storage Server for Public Cloud, for example, within the Amazon Web Services (AWS) cloud. It delivers all the features and functionality possible in a private cloud or datacenter to the public cloud by providing massively scalable and highly available NAS in the cloud. Red Hat Gluster Storage Server for On-Premise Red Hat Gluster Storage Server for On-Premise enables enterprises to treat physical storage as a virtualized, scalable, and centrally managed pool of storage by using commodity server and storage hardware. Red Hat Gluster Storage Server for Public Cloud Red Hat Gluster Storage Server for Public Cloud packages GlusterFS as an Amazon Machine Image (AMI) for deploying scalable NAS in the AWS public cloud. This powerful storage server provides a highly available, scalable, virtualized, and centrally managed pool of storage for Amazon users. Storage node A storage node can be a physical server, a virtual machine, or a public cloud machine image running one instance of Red Hat Gluster Storage. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/chap-planning_red_hat_storage_installation |
Chapter 15. Uninstalling a cluster on GCP | Chapter 15. Uninstalling a cluster on GCP You can remove a cluster that you deployed to Google Cloud Platform (GCP). 15.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted . Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 15.2. Deleting Google Cloud Platform resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Google Cloud Platform (GCP) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on GCP that uses short-term credentials. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --to=<path_to_directory_for_credentials_requests> 2 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Delete the GCP resources that ccoctl created by running the following command: USD ccoctl gcp delete \ --name=<name> \ 1 --project=<gcp_project_id> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ --force-delete-custom-roles 3 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <gcp_project_id> is the GCP project ID in which to delete cloud resources. 3 Optional: This parameter deletes the custom roles that the ccoctl utility creates during installation. GCP does not permanently delete custom roles immediately. 
For more information, see GCP documentation about deleting a custom role . Verification To verify that the resources are deleted, query GCP. For more information, refer to GCP documentation. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2",
"ccoctl gcp delete --name=<name> \\ 1 --project=<gcp_project_id> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> --force-delete-custom-roles 3"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_gcp/uninstalling-cluster-gcp |
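As a convenience, the commands in this chapter can be chained into a single cleanup pass. The following sketch is illustrative only: the installation directory (./mycluster), the ccoctl name (mycluster), the GCP project ID (my-gcp-project), and the credentials-requests directory (./credrequests) are placeholder values that you would replace with your own, and the last three commands apply only to clusters installed with short-term credentials managed outside the cluster.
# Destroy the cluster; the installer reads metadata.json from the install directory.
$ ./openshift-install destroy cluster --dir ./mycluster --log-level info
# Remove the GCP resources that ccoctl created for short-term credentials.
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
$ oc adm release extract --from=$RELEASE_IMAGE --credentials-requests --included --to=./credrequests
$ ccoctl gcp delete --name=mycluster --project=my-gcp-project --credentials-requests-dir=./credrequests
Add --force-delete-custom-roles to the final command if the custom roles that ccoctl created should be removed as well.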
4.5.6. Logging Configuration | 4.5.6. Logging Configuration Clicking on the Logging tab displays the Logging Configuration page, which provides an interface for configuring logging settings. You can configure the following settings for global logging configuration: Checking Log Debugging Messages enables debugging messages in the log file. Checking Log Messages to Syslog enables messages to syslog. You can select the Syslog Message Facility and the Syslog Message Priority. The Syslog Message Priority setting indicates that messages at the selected level and higher are sent to syslog. Checking Log Messages to Log File enables messages to the log file. You can specify the Log File Path name. The logfile message priority setting indicates that messages at the selected level and higher are written to the log file. You can override the global logging settings for specific daemons by selecting one of the daemons listed beneath the Daemon-specific Logging Overrides heading at the bottom of the Logging Configuration page. After selecting the daemon, you can check whether to log the debugging messages for that particular daemon. You can also specify the syslog and log file settings for that daemon. Click Apply for the logging configuration changes you have specified to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-logging-conga-CA
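The settings chosen on this page are written to the cluster configuration file rather than stored in luci itself. The fragment below is a rough sketch of what the result might look like: the element and attribute names (logging, logging_daemon, debug, to_syslog, syslog_facility, syslog_priority, to_logfile, logfile, logfile_priority) and the rgmanager daemon override are assumptions, so verify them against cluster.conf(5) or the /etc/cluster/cluster.conf that luci actually generates.
# Inspect the logging settings after clicking Apply (output shown is illustrative).
$ grep -A 3 '<logging ' /etc/cluster/cluster.conf
<logging debug="on" to_syslog="yes" syslog_facility="daemon" syslog_priority="info"
         to_logfile="yes" logfile="/var/log/cluster/cluster.log" logfile_priority="info">
  <logging_daemon name="rgmanager" debug="on"/>
</logging>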
Chapter 10. Subscription [operators.coreos.com/v1alpha1] | Chapter 10. Subscription [operators.coreos.com/v1alpha1] Description Subscription keeps operators up to date by tracking changes to Catalogs. Type object Required metadata spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubscriptionSpec defines an Application that can be installed status object 10.1.1. .spec Description SubscriptionSpec defines an Application that can be installed Type object Required name source sourceNamespace Property Type Description channel string config object SubscriptionConfig contains configuration specified for a subscription. installPlanApproval string Approval is the user approval policy for an InstallPlan. It must be one of "Automatic" or "Manual". name string source string sourceNamespace string startingCSV string 10.1.2. .spec.config Description SubscriptionConfig contains configuration specified for a subscription. Type object Property Type Description affinity object If specified, overrides the pod's scheduling constraints. nil sub-attributes will not override the original values in the pod.spec for those sub-attributes. Use empty object ({}) to erase original sub-attribute values. annotations object (string) Annotations is an unstructured key value map stored with each Deployment, Pod, APIService in the Operator. Typically, annotations may be set by external tools to store and retrieve arbitrary metadata. Use this field to pre-define annotations that OLM should add to each of the Subscription's deployments, pods, and apiservices. env array Env is a list of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array EnvFrom is a list of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Immutable. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ resources object Resources represents compute resources required by this container. Immutable. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ selector object Selector is the label selector for pods to be configured. 
Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. tolerations array Tolerations are the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. volumeMounts array List of VolumeMounts to set in the container. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array List of Volumes to set in the podSpec. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 10.1.3. .spec.config.affinity Description If specified, overrides the pod's scheduling constraints. nil sub-attributes will not override the original values in the pod.spec for those sub-attributes. Use empty object ({}) to erase original sub-attribute values. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 10.1.4. .spec.config.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 10.1.5. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 10.1.6. 
.spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 10.1.7. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 10.1.8. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 10.1.9. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.10. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 10.1.11. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.12. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 10.1.13. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 10.1.14. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 10.1.15. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 10.1.16. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.17. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 10.1.18. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.19. .spec.config.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 10.1.20. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 10.1.21. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 10.1.22. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. 
A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.23. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.24. 
.spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.25. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.26. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.27. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.28. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.29. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 10.1.30. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.31. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. 
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.32. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.33. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.34. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.35. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.36. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
This array is replaced during a strategic merge patch. 10.1.37. .spec.config.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 10.1.38. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 10.1.39. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 10.1.40. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. 
Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.41. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.42. 
.spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.43. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.44. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.45. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.46. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.47. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 10.1.48. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.49. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. 
If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.50. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.51. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.52. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.53. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.54. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.55. .spec.config.env Description Env is a list of environment variables to set in the container. Cannot be updated. Type array 10.1.56. .spec.config.env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 10.1.57. .spec.config.env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 10.1.58. .spec.config.env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 10.1.59. .spec.config.env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.60. .spec.config.env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.61. 
.spec.config.env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 10.1.62. .spec.config.envFrom Description EnvFrom is a list of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Immutable. Type array 10.1.63. .spec.config.envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 10.1.64. .spec.config.envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 10.1.65. .spec.config.envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 10.1.66. .spec.config.resources Description Resources represents compute resources required by this container. Immutable. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 10.1.67. .spec.config.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 10.1.68. .spec.config.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. 
Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 10.1.69. .spec.config.selector Description Selector is the label selector for pods to be configured. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.70. .spec.config.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.71. .spec.config.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.72. .spec.config.tolerations Description Tolerations are the pod's tolerations. Type array 10.1.73. .spec.config.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 10.1.74. .spec.config.volumeMounts Description List of VolumeMounts to set in the container. Type array 10.1.75. .spec.config.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. 
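The selector and tolerations fields documented in sections 10.1.69 through 10.1.73 might be populated as in the following sketch; the label keys, taint key, and values are illustrative assumptions, not required values.

    spec:
      config:
        selector:                            # must match the labels of the pod template
          matchLabels:
            app: example-app
          matchExpressions:
          - key: tier
            operator: In
            values:
            - backend
        tolerations:
        - key: dedicated                     # hypothetical taint key
          operator: Equal
          value: infra
          effect: NoSchedule
        - key: node.kubernetes.io/not-ready
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 300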
Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 10.1.76. .spec.config.volumes Description List of Volumes to set in the podSpec. Type array 10.1.77. .spec.config.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. 
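Because each volumeMounts entry must reference a volumes entry by name, the two lists are usually written together. The following hypothetical fragment pairs two mounts with an emptyDir volume and a configMap volume; the mount paths and the example-config ConfigMap name are placeholders.

    spec:
      config:
        volumeMounts:
        - name: data                         # must match a volumes[].name entry below
          mountPath: /var/lib/example
        - name: settings
          mountPath: /etc/example/settings.yaml
          subPath: settings.yaml
          readOnly: true
        volumes:
        - name: data
          emptyDir: {}
        - name: settings
          configMap:
            name: example-config             # hypothetical ConfigMap name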
fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 10.1.78. .spec.config.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 10.1.79. .spec.config.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 10.1.80. .spec.config.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 10.1.81. .spec.config.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 10.1.82. .spec.config.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.83. .spec.config.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 10.1.84. .spec.config.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.85. .spec.config.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. 
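A configMap volume that projects only selected keys, using the defaultMode, items, and optional fields described above (the per-item key, path, and mode fields are detailed in the sections that follow), could look like this hypothetical sketch; the volume, ConfigMap, and key names are placeholders.

    spec:
      config:
        volumes:
        - name: app-settings                 # hypothetical volume name
          configMap:
            name: example-config             # hypothetical ConfigMap name
            optional: false
            defaultMode: 0440                # octal in YAML; JSON would require the decimal value 288
            items:
            - key: settings.yaml             # project only this key ...
              path: conf.d/settings.yaml     # ... to this relative path inside the volume
              mode: 0400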
items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 10.1.86. .spec.config.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.87. .spec.config.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.88. .spec.config.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 10.1.89. .spec.config.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.90. 
.spec.config.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 10.1.91. .spec.config.volumes[].downwardAPI.items Description Items is a list of downward API volume files Type array 10.1.92. .spec.config.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 10.1.93. .spec.config.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.94. .spec.config.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.95. .spec.config.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory.
More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 10.1.96. .spec.config.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 10.1.97. .spec.config.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. 
Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 10.1.98. .spec.config.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 10.1.99. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. 
(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 10.1.100. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 10.1.101. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. 
When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 10.1.102. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 10.1.103. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
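Putting the preceding sections together, an ephemeral volume with an inline volumeClaimTemplate might be declared as in the following sketch; the StorageClass name, labels, and storage size are assumptions, not defaults.

    spec:
      config:
        volumes:
        - name: scratch                      # hypothetical volume name
          ephemeral:
            volumeClaimTemplate:
              metadata:
                labels:
                  app: example-app           # labels and annotations are copied into the generated PVC
              spec:
                accessModes:
                - ReadWriteOnce
                storageClassName: example-sc # hypothetical StorageClass name
                volumeMode: Filesystem
                resources:
                  requests:
                    storage: 1Gi

Per the naming rule quoted above, the PVC generated for this entry would be named <pod name>-scratch and would be deleted together with the pod.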
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.104. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.105. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.106. .spec.config.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 10.1.107. .spec.config.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 10.1.108. .spec.config.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. 
If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.109. .spec.config.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 10.1.110. .spec.config.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 10.1.111. .spec.config.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 10.1.112. .spec.config.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. 
More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 10.1.113. .spec.config.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 10.1.114. .spec.config.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 10.1.115. .spec.config.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.116. 
.spec.config.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 10.1.117. .spec.config.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 10.1.118. .spec.config.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 10.1.119. .spec.config.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 10.1.120. .spec.config.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 10.1.121. .spec.config.volumes[].projected.sources Description sources is the list of volume projections Type array 10.1.122. 
.spec.config.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description clusterTrustBundle object ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 10.1.123. .spec.config.volumes[].projected.sources[].clusterTrustBundle Description ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. Type object Required path Property Type Description labelSelector object Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". name string Select a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector. optional boolean If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles. path string Relative path from the volume root to write the bundle. signerName string Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated. 10.1.124. .spec.config.volumes[].projected.sources[].clusterTrustBundle.labelSelector Description Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.125. .spec.config.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.126. .spec.config.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.127. .spec.config.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 10.1.128. .spec.config.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.129. .spec.config.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. 
May not start with the string '..'. 10.1.130. .spec.config.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 10.1.131. .spec.config.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 10.1.132. .spec.config.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 10.1.133. .spec.config.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.134. .spec.config.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.135. .spec.config.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean optional field specify whether the Secret or its key must be defined 10.1.136. .spec.config.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.137. .spec.config.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.138. .spec.config.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 10.1.139. .spec.config.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 10.1.140. .spec.config.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. 
More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 10.1.141. .spec.config.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.142. .spec.config.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 10.1.143. .spec.config.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. 
If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.144. .spec.config.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 10.1.145. .spec.config.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.146. .spec.config.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.147. .spec.config.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 10.1.148. .spec.config.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.149. .spec.config.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 10.1.150. .status Description Type object Required lastUpdated Property Type Description catalogHealth array CatalogHealth contains the Subscription's view of its relevant CatalogSources' status. It is used to determine SubscriptionStatusConditions related to CatalogSources. catalogHealth[] object SubscriptionCatalogHealth describes the health of a CatalogSource the Subscription knows about. conditions array Conditions is a list of the latest available observations about a Subscription's current state. conditions[] object SubscriptionCondition represents the latest available observations of a Subscription's state. currentCSV string CurrentCSV is the CSV the Subscription is progressing to. installPlanGeneration integer InstallPlanGeneration is the current generation of the installplan installPlanRef object InstallPlanRef is a reference to the latest InstallPlan that contains the Subscription's current CSV. installedCSV string InstalledCSV is the CSV currently installed by the Subscription. installplan object Install is a reference to the latest InstallPlan generated for the Subscription. DEPRECATED: InstallPlanRef lastUpdated string LastUpdated represents the last time that the Subscription status was updated. reason string Reason is the reason the Subscription was transitioned to its current state. state string State represents the current state of the Subscription 10.1.151. .status.catalogHealth Description CatalogHealth contains the Subscription's view of its relevant CatalogSources' status. 
It is used to determine SubscriptionStatusConditions related to CatalogSources. Type array 10.1.152. .status.catalogHealth[] Description SubscriptionCatalogHealth describes the health of a CatalogSource the Subscription knows about. Type object Required catalogSourceRef healthy lastUpdated Property Type Description catalogSourceRef object CatalogSourceRef is a reference to a CatalogSource. healthy boolean Healthy is true if the CatalogSource is healthy; false otherwise. lastUpdated string LastUpdated represents the last time that the CatalogSourceHealth changed 10.1.153. .status.catalogHealth[].catalogSourceRef Description CatalogSourceRef is a reference to a CatalogSource. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.154. .status.conditions Description Conditions is a list of the latest available observations about a Subscription's current state. Type array 10.1.155. .status.conditions[] Description SubscriptionCondition represents the latest available observations of a Subscription's state. Type object Required status type Property Type Description lastHeartbeatTime string LastHeartbeatTime is the last time we got an update on a given condition lastTransitionTime string LastTransitionTime is the last time the condition transit from one status to another message string Message is a human-readable message indicating details about last transition. reason string Reason is a one-word CamelCase reason for the condition's last transition. status string Status is the status of the condition, one of True, False, Unknown. type string Type is the type of Subscription condition. 10.1.156. .status.installPlanRef Description InstallPlanRef is a reference to the latest InstallPlan that contains the Subscription's current CSV. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.157. .status.installplan Description Install is a reference to the latest InstallPlan generated for the Subscription. DEPRECATED: InstallPlanRef Type object Required apiVersion kind name uuid Property Type Description apiVersion string kind string name string uuid string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 10.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/subscriptions GET : list objects of kind Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions DELETE : delete collection of Subscription GET : list objects of kind Subscription POST : create a Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name} DELETE : delete a Subscription GET : read the specified Subscription PATCH : partially update the specified Subscription PUT : replace the specified Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name}/status GET : read status of the specified Subscription PATCH : partially update status of the specified Subscription PUT : replace status of the specified Subscription 10.2.1. /apis/operators.coreos.com/v1alpha1/subscriptions HTTP method GET Description list objects of kind Subscription Table 10.1. HTTP responses HTTP code Reponse body 200 - OK SubscriptionList schema 401 - Unauthorized Empty 10.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions HTTP method DELETE Description delete collection of Subscription Table 10.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Subscription Table 10.3. HTTP responses HTTP code Reponse body 200 - OK SubscriptionList schema 401 - Unauthorized Empty HTTP method POST Description create a Subscription Table 10.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.5. Body parameters Parameter Type Description body Subscription schema Table 10.6. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 201 - Created Subscription schema 202 - Accepted Subscription schema 401 - Unauthorized Empty 10.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name} Table 10.7. Global path parameters Parameter Type Description name string name of the Subscription HTTP method DELETE Description delete a Subscription Table 10.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Subscription Table 10.10. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Subscription Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.12. 
HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Subscription Table 10.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.14. Body parameters Parameter Type Description body Subscription schema Table 10.15. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 201 - Created Subscription schema 401 - Unauthorized Empty 10.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name}/status Table 10.16. Global path parameters Parameter Type Description name string name of the Subscription HTTP method GET Description read status of the specified Subscription Table 10.17. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Subscription Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.19. 
HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Subscription Table 10.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.21. Body parameters Parameter Type Description body Subscription schema Table 10.22. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 201 - Created Subscription schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operatorhub_apis/subscription-operators-coreos-com-v1alpha1 |
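The projected volume fields documented above are easier to read in context. The following is a minimal, hypothetical sketch of a Subscription whose .spec.config.volumes[] entry combines configMap, downwardAPI, and serviceAccountToken projections; the operator, catalog, ConfigMap, and audience names are illustrative assumptions rather than values taken from this reference, and the top-level channel, name, source, and sourceNamespace fields are ordinary Subscription settings included only so the manifest is complete.

$ cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator              # illustrative name
  namespace: openshift-operators
spec:
  channel: stable                     # placeholder channel
  name: example-operator              # placeholder package name
  source: example-catalog             # placeholder CatalogSource
  sourceNamespace: openshift-marketplace
  config:
    volumes:
    - name: projected-config
      projected:
        sources:
        - configMap:
            name: example-settings    # assumed to exist in the operator namespace
            optional: true            # do not fail volume setup if it is missing
            items:
            - key: settings.yaml      # key to project
              path: settings.yaml     # relative path inside the volume
              mode: 0444              # octal mode bits; YAML accepts octal or decimal
        - downwardAPI:
            items:
            - path: labels
              fieldRef:
                fieldPath: metadata.labels   # only annotations, labels, name, and namespace are supported
        - serviceAccountToken:
            audience: example-audience       # intended audience of the token
            expirationSeconds: 3600          # defaults to 1 hour, must be at least 10 minutes
            path: token                      # path relative to the volume mount point
EOF

As the tables above note, items[].path and serviceAccountToken.path must stay relative, and a file's mode falls back to the volume defaultMode when it is omitted.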
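To relate the .status fields above to day-to-day checks, the commands below read installedCSV, currentCSV, state, and the per-catalog health entries from a Subscription; the example-operator name and openshift-operators namespace are assumptions used only for illustration.

$ oc -n openshift-operators get subscription example-operator \
    -o jsonpath='{.status.installedCSV}{"\n"}{.status.currentCSV}{"\n"}{.status.state}{"\n"}'
$ oc -n openshift-operators get subscription example-operator \
    -o jsonpath='{range .status.catalogHealth[*]}{.catalogSourceRef.name}{" healthy="}{.healthy}{" lastUpdated="}{.lastUpdated}{"\n"}{end}'

Per the field descriptions above, currentCSV is the CSV the Subscription is progressing to, installedCSV is the CSV currently installed, and installPlanRef points at the InstallPlan that contains the current CSV.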
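The endpoints and query parameters listed above can also be exercised directly against the API server. The following sketch sends a merge patch to the namespaced Subscription endpoint with dryRun=All and fieldValidation=Strict, so the change is validated server-side but never persisted; the namespace, resource name, and channel value are illustrative, and -k is used only to keep the example short.

$ TOKEN=$(oc whoami -t)
$ APISERVER=$(oc whoami --show-server)
$ curl -k -X PATCH \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/merge-patch+json" \
    -d '{"spec":{"channel":"stable"}}' \
    "${APISERVER}/apis/operators.coreos.com/v1alpha1/namespaces/openshift-operators/subscriptions/example-operator?dryRun=All&fieldValidation=Strict"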
Managing Red Hat Gluster Storage using RHV Administration Portal | Managing Red Hat Gluster Storage using RHV Administration Portal Red Hat Hyperconverged Infrastructure for Virtualization 1.8 Perform common Red Hat Gluster Storage management tasks in the Administration Portal Laura Bailey | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/managing_red_hat_gluster_storage_using_rhv_administration_portal/index
Chapter 17. KubeAPIServer [operator.openshift.io/v1] | Chapter 17. KubeAPIServer [operator.openshift.io/v1] Description KubeAPIServer provides information to configure an operator to manage kube-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes API Server status object status is the most recently observed status of the Kubernetes API Server 17.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes API Server Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 17.1.2. 
.status Description status is the most recently observed status of the Kubernetes API Server Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state serviceAccountIssuers array serviceAccountIssuers tracks history of used service account issuers. The item without expiration time represents the currently used service account issuer. The other items represents service account issuers that were used previously and are still being trusted. The default expiration for the items is set by the platform and it defaults to 24h. see: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection serviceAccountIssuers[] object version string version is the level this availability applies to 17.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 17.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 17.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 17.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 17.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 17.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. 
lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 17.1.9. .status.serviceAccountIssuers Description serviceAccountIssuers tracks history of used service account issuers. The item without expiration time represents the currently used service account issuer. The other items represents service account issuers that were used previously and are still being trusted. The default expiration for the items is set by the platform and it defaults to 24h. see: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection Type array 17.1.10. .status.serviceAccountIssuers[] Description Type object Property Type Description expirationTime string expirationTime is the time after which this service account issuer will be pruned and removed from the trusted list of service account issuers. name string name is the name of the service account issuer --- 17.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubeapiservers DELETE : delete collection of KubeAPIServer GET : list objects of kind KubeAPIServer POST : create a KubeAPIServer /apis/operator.openshift.io/v1/kubeapiservers/{name} DELETE : delete a KubeAPIServer GET : read the specified KubeAPIServer PATCH : partially update the specified KubeAPIServer PUT : replace the specified KubeAPIServer /apis/operator.openshift.io/v1/kubeapiservers/{name}/status GET : read status of the specified KubeAPIServer PATCH : partially update status of the specified KubeAPIServer PUT : replace status of the specified KubeAPIServer 17.2.1. /apis/operator.openshift.io/v1/kubeapiservers HTTP method DELETE Description delete collection of KubeAPIServer Table 17.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeAPIServer Table 17.2. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeAPIServer Table 17.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.4. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.5. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 202 - Accepted KubeAPIServer schema 401 - Unauthorized Empty 17.2.2. /apis/operator.openshift.io/v1/kubeapiservers/{name} Table 17.6. Global path parameters Parameter Type Description name string name of the KubeAPIServer HTTP method DELETE Description delete a KubeAPIServer Table 17.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 17.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeAPIServer Table 17.9. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeAPIServer Table 17.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.11. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeAPIServer Table 17.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.13. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.14. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 401 - Unauthorized Empty 17.2.3. /apis/operator.openshift.io/v1/kubeapiservers/{name}/status Table 17.15. Global path parameters Parameter Type Description name string name of the KubeAPIServer HTTP method GET Description read status of the specified KubeAPIServer Table 17.16. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeAPIServer Table 17.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.18. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeAPIServer Table 17.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.20. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.21. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/kubeapiserver-operator-openshift-io-v1 |
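As a practical complement to the spec and status tables above, the commands below show one way to read the rollout state and adjust logging on the KubeAPIServer operator resource; they assume the usual single, cluster-scoped instance named cluster, which is not stated in this reference.

$ oc get kubeapiserver cluster -o jsonpath='{.status.latestAvailableRevision}{"\n"}'
$ oc get kubeapiserver cluster \
    -o jsonpath='{range .status.nodeStatuses[*]}{.nodeName}{" current="}{.currentRevision}{" target="}{.targetRevision}{"\n"}{end}'
$ oc patch kubeapiserver cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'

logLevel accepts Normal, Debug, Trace, and TraceAll, and a node whose currentRevision trails its targetRevision is still applying the newer revision, as described in the nodeStatuses fields above.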
Appendix A. OCF Return Codes | Appendix A. OCF Return Codes This appendix describes the OCF return codes and how they are interpreted by Pacemaker. The first thing the cluster does when an agent returns a code is to check the return code against the expected result. If the result does not match the expected value, then the operation is considered to have failed, and recovery action is initiated. For any invocation, resource agents must exit with a defined return code that informs the caller of the outcome of the invoked action. There are three types of failure recovery, as described in Table A.1, "Types of Recovery Performed by the Cluster" . Table A.1. Types of Recovery Performed by the Cluster Type Description Action Taken by the Cluster soft A transient error occurred. Restart the resource or move it to a new location . hard A non-transient error that may be specific to the current node occurred. Move the resource elsewhere and prevent it from being retried on the current node. fatal A non-transient error that will be common to all cluster nodes occurred (for example, a bad configuration was specified). Stop the resource and prevent it from being started on any cluster node. Table A.2, "OCF Return Codes" provides The OCF return codes and the type of recovery the cluster will initiate when a failure code is received. Note that even actions that return 0 (OCF alias OCF_SUCCESS ) can be considered to have failed, if 0 was not the expected return value. Table A.2. OCF Return Codes Return Code OCF Label Description 0 OCF_SUCCESS The action completed successfully. This is the expected return code for any successful start, stop, promote, and demote command. Type if unexpected: soft 1 OCF_ERR_GENERIC The action returned a generic error. Type: soft The resource manager will attempt to recover the resource or move it to a new location. 2 OCF_ERR_ARGS The resource's configuration is not valid on this machine. For example, it refers to a location not found on the node. Type: hard The resource manager will move the resource elsewhere and prevent it from being retried on the current node 3 OCF_ERR_UNIMPLEMENTED The requested action is not implemented. Type: hard 4 OCF_ERR_PERM The resource agent does not have sufficient privileges to complete the task. This may be due, for example, to the agent not being able to open a certain file, to listen on a specific socket, or to write to a directory. Type: hard Unless specifically configured otherwise, the resource manager will attempt to recover a resource which failed with this error by restarting the resource on a different node (where the permission problem may not exist). 5 OCF_ERR_INSTALLED A required component is missing on the node where the action was executed. This may be due to a required binary not being executable, or a vital configuration file being unreadable. Type: hard Unless specifically configured otherwise, the resource manager will attempt to recover a resource which failed with this error by restarting the resource on a different node (where the required files or binaries may be present). 6 OCF_ERR_CONFIGURED The resource's configuration on the local node is invalid. Type: fatal When this code is returned, Pacemaker will prevent the resource from running on any node in the cluster, even if the service configuraiton is valid on some other node. 7 OCF_NOT_RUNNING The resource is safely stopped. This implies that the resource has either gracefully shut down, or has never been started. 
Type if unexpected: soft The cluster will not attempt to stop a resource that returns this for any action. 8 OCF_RUNNING_MASTER The resource is running in master mode. Type if unexpected: soft 9 OCF_FAILED_MASTER The resource is in master mode but has failed. Type: soft The resource will be demoted, stopped and then started (and possibly promoted) again. other N/A Custom error code. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ap-ha-ocfcodes-HARR |
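To make the mapping between agent actions and return codes concrete, here is a minimal, hypothetical shell sketch of the monitor and validate-all actions of a resource agent. The daemon name, pid file, and OCF_RESKEY_* parameters are invented for illustration, and a real agent would normally source the OCF shell function library rather than hard-coding the exit codes as this sketch does.

#!/bin/sh
# Sketch only: shows which OCF return codes the cluster expects from each action.
OCF_SUCCESS=0
OCF_ERR_UNIMPLEMENTED=3
OCF_ERR_INSTALLED=5
OCF_ERR_CONFIGURED=6
OCF_NOT_RUNNING=7

mydaemon_validate() {
    # Missing binary or unreadable vital configuration file: hard error,
    # so the cluster retries the resource on a different node.
    command -v /usr/sbin/mydaemon >/dev/null 2>&1 || return $OCF_ERR_INSTALLED
    [ -r "${OCF_RESKEY_config:-/etc/mydaemon.conf}" ] || return $OCF_ERR_INSTALLED
    # Nonsensical parameter value: fatal, so the resource is not started on any node.
    case "${OCF_RESKEY_port:-5432}" in
        ''|*[!0-9]*) return $OCF_ERR_CONFIGURED ;;
    esac
    return $OCF_SUCCESS
}

mydaemon_monitor() {
    # OCF_NOT_RUNNING means "safely stopped" and is the expected answer to a
    # probe on a node where the resource has never been started.
    pidfile=/var/run/mydaemon.pid
    [ -f "$pidfile" ] || return $OCF_NOT_RUNNING
    kill -0 "$(cat "$pidfile")" 2>/dev/null || return $OCF_NOT_RUNNING
    return $OCF_SUCCESS
}

case "$1" in
    monitor)      mydaemon_monitor ;;
    validate-all) mydaemon_validate ;;
    *)            exit $OCF_ERR_UNIMPLEMENTED ;;
esac
exit $?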
Chapter 7. Installing a private cluster on IBM Cloud | Chapter 7. Installing a private cluster on IBM Cloud In OpenShift Container Platform version 4.14, you can install a private cluster into an existing VPC. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 7.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Create a DNS zone using IBM Cloud(R) DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud(R) DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 7.3. Private clusters in IBM Cloud To create a private cluster on IBM Cloud(R), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud(R) APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public ingress A public DNS zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 7.3.1. 
Limitations Private clusters on IBM Cloud(R) are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 7.4. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 7.4.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.4.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The subnets for control plane machines and compute machines To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 7.4.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. 
With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.7. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a bastion host on your cloud network or a machine that has access to the network through a VPN. For more information about private cluster installation requirements, see "Private clusters". Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux (RHEL) 8, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 7.9. Manually creating the installation configuration file When installing a private OpenShift Container Platform cluster, you must manually generate the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. 
If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 7.9.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.9.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 7.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 mx2-8x64 mx2d-4x32 mx3d-2x20 ox2-4x32 ox2-8x64 ux2d-2x56 vx2d-4x56 7.9.3. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-network-rg 13 networkResourceGroupName: eu-gb-example-existing-network-rg 14 vpcName: eu-gb-example-network-1 15 controlPlaneSubnets: 16 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 17 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 18 pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 1 8 12 19 Required. 
2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 13 The name of an existing resource group. All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 14 Specify the name of the resource group that contains the existing virtual private cloud (VPC). The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 15 Specify the name of an existing VPC. 16 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 17 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. The default value is External . 20 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.9.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. 
You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . 
Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 
3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 7.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 7.14. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.15. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-network-rg 13 networkResourceGroupName: eu-gb-example-existing-network-rg 14 vpcName: eu-gb-example-network-1 15 controlPlaneSubnets: 16 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 17 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 18 pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_cloud/installing-ibm-cloud-private |
Chapter 15. Configuring API tokens | Chapter 15. Configuring API tokens Red Hat Advanced Cluster Security for Kubernetes (RHACS) requires API tokens for some system integrations, authentication processes, and system functions. You can configure tokens using the RHACS web interface. Note To prevent privilege escalation, when you create a new token, your role's permissions limit the permission you can assign to that token. For example, if you only have read permission for the Integration resource, you cannot create a token with write permission. If you want a custom role to create tokens for other users to use, you must assign the required permissions to that custom role. Use short-lived tokens for machine-to-machine communication, such as CI/CD pipelines, scripts, and other automation. Also, use the roxctl central login command for human-to-machine communication, such as roxctl CLI or API access. The majority of cloud service providers support OIDC identity tokens, for example, Microsoft Entra ID, Google Cloud Identity Platform, and AWS Cognito. OIDC identity tokens issued by these services can be used for RHACS short-lived access. 15.1. Creating an API token Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll to the Authentication Tokens category, and then click API Token . Click Generate Token . Enter a name for the token and select a role that provides the required level of access (for example, Continuous Integration or Sensor Creator ). Click Generate . Important Copy the generated token and securely store it. You will not be able to view it again. Additional resources Using an authentication provider to authenticate with roxctl Configuring short-lived access Using Azure Entra ID service principals for machine to machine auth with RHACS 15.2. About API token expiration API tokens expire one year from the creation date. RHACS alerts you in the web interface and by sending log messages to Central when a token will expire in less than one week. The log message process runs once an hour. Once a day, the process lists the tokens that are expiring and creates a log message for each one. Log messages are issued once a day and appear in Central logs. Logs have the format as shown in the following example: Warn: API Token [token name] (ID [token ID]) will expire in less than X days. You can change the default settings for the log message process by configuring the environment variables shown in the following table: Environment variable Default value Description ROX_TOKEN_EXPIRATION_NOTIFIER_INTERVAL 1h (1 hour) The frequency at which the log message background loop that lists tokens and creates the logs will run. ROX_TOKEN_EXPIRATION_NOTIFIER_BACKOFF_INTERVAL 24h (1 day) The frequency at which the loop lists tokens and issues notifications. ROX_TOKEN_EXPIRATION_DETECTION_WINDOW 168h (1 week) The time period before expiration of the token that will cause the notification to be generated. | [
"Warn: API Token [token name] (ID [token ID]) will expire in less than X days."
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/configure-api-token |
Chapter 95. MongoDB | Chapter 95. MongoDB Both producer and consumer are supported According to Wikipedia: "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases and ACID guarantees." NoSQL solutions have grown in popularity in the last few years, and major extremely-used sites and services such as Facebook, LinkedIn, Twitter, etc. are known to use them extensively to achieve scalability and agility. Basically, NoSQL solutions differ from traditional RDBMS (Relational Database Management Systems) in that they don't use SQL as their query language and generally don't offer ACID-like transactional behaviour nor relational data. Instead, they are designed around the concept of flexible data structures and schemas (meaning that the traditional concept of a database table with a fixed schema is dropped), extreme scalability on commodity hardware and blazing-fast processing. MongoDB is a very popular NoSQL solution and the camel-mongodb component integrates Camel with MongoDB allowing you to interact with MongoDB collections both as a producer (performing operations on the collection) and as a consumer (consuming documents from a MongoDB collection). MongoDB revolves around the concepts of documents (not as is office documents, but rather hierarchical data defined in JSON/BSON) and collections. This component page will assume you are familiar with them. Otherwise, visit http://www.mongodb.org/ . Note The MongoDB Camel component uses Mongo Java Driver 4.x. 95.1. Dependencies When using mongodb with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mongodb-starter</artifactId> </dependency> 95.2. URI format 95.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 95.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 95.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 95.4. Component Options The MongoDB component supports 4 options, which are listed below. Name Description Default Type mongoConnection (common) Autowired Shared client used for connection. All endpoints generated from the component will share this connection client. 
MongoClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 95.5. Endpoint Options The MongoDB endpoint is configured using URI syntax: with the following path and query parameters: 95.5.1. Path Parameters (1 parameters) Name Description Default Type connectionBean (common) Required Sets the connection bean reference used to lookup a client for connecting to a database. String 95.5.2. Query Parameters (27 parameters) Name Description Default Type collection (common) Sets the name of the MongoDB collection to bind to this endpoint. String collectionIndex (common) Sets the collection index (JSON FORMAT : \\{ field1 : order1, field2 : order2}). String createCollection (common) Create collection during initialisation if it doesn't exist. Default is true. true boolean database (common) Sets the name of the MongoDB database to target. String hosts (common) Host address of mongodb server in host:port format. It's possible also use more than one address, as comma separated list of hosts: host1:port1,host2:port2. If hosts parameter is specified, provided connectionBean is ignored. String mongoConnection (common) Sets the connection bean used as a client for connecting to a database. MongoClient operation (common) Sets the operation this endpoint will execute against MongoDB. Enum values: findById findOneByQuery findAll findDistinct insert save update remove bulkWrite aggregate getDbStats getColStats count command MongoDbOperation outputType (common) Convert the output of the producer to the selected type : DocumentList Document or MongoIterable. DocumentList or MongoIterable applies to findAll and aggregate. Document applies to all other operations. Enum values: DocumentList Document MongoIterable MongoDbOutputType bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerType (consumer) Consumer type. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean cursorRegenerationDelay (advanced) MongoDB tailable cursors will block until new data arrives. If no new data is inserted, after some time the cursor will be automatically freed and closed by the MongoDB server. The client is expected to regenerate the cursor if needed. This value specifies the time to wait before attempting to fetch a new cursor, and if the attempt fails, how long before the attempt is made. Default value is 1000ms. 1000 long dynamicity (advanced) Sets whether this endpoint will attempt to dynamically resolve the target database and collection from the incoming Exchange properties. Can be used to override at runtime the database and collection specified on the otherwise static endpoint URI. It is disabled by default to boost performance. Enabling it will take a minimal performance hit. false boolean readPreference (advanced) Configure how MongoDB clients route read operations to the members of a replica set. Possible values are PRIMARY, PRIMARY_PREFERRED, SECONDARY, SECONDARY_PREFERRED or NEAREST. Enum values: PRIMARY PRIMARY_PREFERRED SECONDARY SECONDARY_PREFERRED NEAREST PRIMARY String writeConcern (advanced) Configure the connection bean with the level of acknowledgment requested from MongoDB for write operations to a standalone mongod, replicaset or cluster. Possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED or MAJORITY. Enum values: ACKNOWLEDGED W1 W2 W3 UNACKNOWLEDGED JOURNALED MAJORITY ACKNOWLEDGED String writeResultAsHeader (advanced) In write operations, it determines whether instead of returning WriteResult as the body of the OUT message, we transfer the IN message to the OUT and attach the WriteResult as a header. false boolean streamFilter (changeStream) Filter condition for change streams consumer. String password (security) User password for mongodb connection. String username (security) Username for mongodb connection. String persistentId (tail) One tail tracking collection can host many trackers for several tailable consumers. To keep them separate, each tracker should have its own unique persistentId. String persistentTailTracking (tail) Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. 
The next time the system is up, the endpoint will recover the cursor from the point where it last stopped slurping records. false boolean tailTrackCollection (tail) Collection where tail tracking information will be persisted. If not specified, MongoDbTailTrackingConfig#DEFAULT_COLLECTION will be used by default. String tailTrackDb (tail) Indicates what database the tail tracking mechanism will persist to. If not specified, the current database will be picked by default. Dynamicity will not be taken into account even if enabled, i.e. the tail tracking database will not vary past endpoint initialisation. String tailTrackField (tail) Field where the last tracked value will be placed. If not specified, MongoDbTailTrackingConfig#DEFAULT_FIELD will be used by default. String tailTrackIncreasingField (tail) Correlation field in the incoming record which is of increasing nature and will be used to position the tailing cursor every time it is generated. The cursor will be (re)created with a query of type: tailTrackIncreasingField greater than lastValue (possibly recovered from persistent tail tracking). Can be of type Integer, Date, String, etc. NOTE: No support for dot notation at the current time, so the field should be at the top level of the document. String 95.6. Configuration of database in Spring XML The following Spring XML creates a bean defining the connection to a MongoDB instance. Since Mongo Java driver 3, the WriteConcern and readPreference options are not dynamically modifiable. They are defined in the mongoClient object <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xmlns:mongo="http://www.springframework.org/schema/data/mongo" xsi:schemaLocation="http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> <mongo:mongo-client id="mongoBean" host="USD{mongo.url}" port="USD{mongo.port}" credentials="USD{mongo.user}:USD{mongo.pass}@USD{mongo.dbname}"> <mongo:client-options write-concern="NORMAL" /> </mongo:mongo-client> </beans> 95.7. Sample route The following route defined in Spring XML executes the operation getDbStats on a collection. Get DB stats for specified collection <route> <from uri="direct:start" /> <!-- using bean 'mongoBean' defined above --> <to uri="mongodb:mongoBean?database=USD{mongodb.database}&collection=USD{mongodb.collection}&operation=getDbStats" /> <to uri="direct:result" /> </route> 95.8. MongoDB operations - producer endpoints 95.8.1. Query operations 95.8.1.1. findById This operation retrieves only one element from the collection whose _id field matches the content of the IN message body. The incoming object can be anything that has an equivalent to a Bson type. See http://bsonspec.org/spec.html and http://www.mongodb.org/display/DOCS/Java+Types . from("direct:findById") .to("mongodb:myDb?database=flights&collection=tickets&operation=findById") .to("mock:resultFindById"); Please note that the default _id is treated by Mongo as an ObjectId type, so you may need to convert it properly. 
from("direct:findById") .convertBodyTo(ObjectId.class) .to("mongodb:myDb?database=flights&collection=tickets&operation=findById") .to("mock:resultFindById"); Note Supports optional parameters This operation supports projection operators. See Specifying a fields filter (projection) . 95.8.1.2. findOneByQuery Retrieve the first element from a collection matching a MongoDB query selector. If the CamelMongoDbCriteria header is set, then its value is used as the query selector . If the CamelMongoDbCriteria header is null , then the IN message body is used as the query selector. In both cases, the query selector should be of type Bson or convertible to Bson (for instance, a JSON string or HashMap ). See Type conversions for more info. Create query selectors using the Filters provided by the MongoDB Driver. 95.8.1.3. Example without a query selector (returns the first document in a collection) from("direct:findOneByQuery") .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery"); 95.8.1.4. Example with a query selector (returns the first matching document in a collection): from("direct:findOneByQuery") .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani"))) .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery"); Note Supports optional parameters This operation supports projection operators and sort clauses. See Specifying a fields filter (projection) , Specifying a sort clause. 95.8.1.5. findAll The findAll operation returns all documents matching a query, or none at all, in which case all documents contained in the collection are returned. The query object is extracted CamelMongoDbCriteria header . if the CamelMongoDbCriteria header is null the query object is extracted message body, i.e. it should be of type Bson or convertible to Bson . It can be a JSON String or a Hashmap. See Type conversions for more info. 95.8.1.5.1. Example without a query selector (returns all documents in a collection) from("direct:findAll") .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll"); 95.8.1.5.2. Example with a query selector (returns all matching documents in a collection) from("direct:findAll") .setHeader(MongoDbConstants.CRITERIA, Filters.eq("name", "Raul Kripalani")) .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll"); Paging and efficient retrieval is supported via the following headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbNumToSkip MongoDbConstants.NUM_TO_SKIP Discards a given number of elements at the beginning of the cursor. int/Integer CamelMongoDbLimit MongoDbConstants.LIMIT Limits the number of elements returned. int/Integer CamelMongoDbBatchSize MongoDbConstants.BATCH_SIZE Limits the number of elements returned in one batch. A cursor typically fetches a batch of result objects and store them locally. If batchSize is positive, it represents the size of each batch of objects retrieved. It can be adjusted to optimize performance and limit data transfer. If batchSize is negative, it will limit of number objects returned, that fit within the max batch size limit (usually 4MB), and cursor will be closed. For example if batchSize is -10, then the server will return a maximum of 10 documents and as many as can fit in 4MB, then close the cursor. 
Note that this feature is different from limit() in that documents must fit within a maximum size, and it removes the need to send a request to close the cursor server-side. The batch size can be changed even after a cursor is iterated, in which case the setting will apply on the batch retrieval. int/Integer CamelMongoDbAllowDiskUse MongoDbConstants.ALLOW_DISK_USE Sets allowDiskUse MongoDB flag. This is supported since MongoDB Server 4.3.1. Using this header with older MongoDB Server version can cause query to fail. boolean/Boolean 95.8.1.5.3. Example with option outputType=MongoIterable and batch size from("direct:findAll") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani"))) .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=MongoIterable") .to("mock:resultFindAll"); The findAll operation will also return the following OUT headers to enable you to iterate through result pages if you are using paging: Header key Quick constant Description (extracted from MongoDB API doc) Data type CamelMongoDbResultTotalSize MongoDbConstants.RESULT_TOTAL_SIZE Number of objects matching the query. This does not take limit/skip into consideration. int/Integer CamelMongoDbResultPageSize MongoDbConstants.RESULT_PAGE_SIZE Number of objects matching the query. This does not take limit/skip into consideration. int/Integer Note Supports optional parameters This operation supports projection operators and sort clauses. See Specifying a fields filter (projection) , Specifying a sort clause. 95.8.1.6. count Returns the total number of objects in a collection, returning a Long as the OUT message body. The following example will count the number of records in the "dynamicCollectionName" collection. Notice how dynamicity is enabled, and as a result, the operation will not run against the "notableScientists" collection, but against the "dynamicCollectionName" collection. // from("direct:count").to("mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true"); Long result = template.requestBodyAndHeader("direct:count", "irrelevantBody", MongoDbConstants.COLLECTION, "dynamicCollectionName"); assertTrue("Result is not of type Long", result instanceof Long); You can provide a query The query object is extracted CamelMongoDbCriteria header . if the CamelMongoDbCriteria header is null the query object is extracted message body, i.e. it should be of type Bson or convertible to Bson ., and operation will return the amount of documents matching this criteria. Document query = ... Long count = template.requestBodyAndHeader("direct:count", query, MongoDbConstants.COLLECTION, "dynamicCollectionName"); 95.8.1.7. Specifying a fields filter (projection) Query operations will, by default, return the matching objects in their entirety (with all their fields). If your documents are large and you only require retrieving a subset of their fields, you can specify a field filter in all query operations, simply by setting the relevant Bson (or type convertible to Bson , such as a JSON String, Map, etc.) on the CamelMongoDbFieldsProjection header, constant shortcut: MongoDbConstants.FIELDS_PROJECTION . Here is an example that uses MongoDB's Projections to simplify the creation of Bson. 
It retrieves all fields except _id and boringField : // route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") Bson fieldProjection = Projections.exclude("_id", "boringField"); Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection); 95.8.1.8. Specifying a sort clause There is often a requirement to fetch the min/max record from a collection by sorting on a particular field. Use MongoDB's Sorts to simplify the creation of the Bson sort document. The following example retrieves all documents sorted by _id in descending order: // route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") Bson sorts = Sorts.descending("_id"); Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.SORT_BY, sorts); In a Camel route the SORT_BY header can be used with the findOneByQuery operation to achieve the same result. If the FIELDS_PROJECTION header is also specified, the operation will return a single field/value pair that can be passed directly to another component (for example, a parameterized MyBatis SELECT query). This example demonstrates fetching the temporally newest document from a collection and reducing the result to a single field, based on the documentTimestamp field: from("direct:someTriggeringEvent") .setHeader(MongoDbConstants.SORT_BY).constant(Sorts.descending("documentTimestamp")) .setHeader(MongoDbConstants.FIELDS_PROJECTION).constant(Projections.include("documentTimestamp")) .setBody().constant("{}") .to("mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery") .to("direct:aMyBatisParameterizedSelect"); 95.8.2. Create/update operations 95.8.2.1. insert Inserts a new object into the MongoDB collection, taken from the IN message body. Type conversion is attempted to turn it into Document or a List . Two modes are supported: single insert and multiple insert. For multiple insert, the endpoint will expect a List, Array or Collection of objects of any type, as long as they are - or can be converted to - Document . Example: from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=insert"); The operation will return a WriteResult, and depending on the WriteConcern or the value of the invokeGetLastError option, getLastError() would have been called already or not. If you want to access the ultimate result of the write operation, you need to retrieve the CommandResult by calling getLastError() or getCachedLastError() on the WriteResult . Then you can verify the result by calling CommandResult.ok() , CommandResult.getErrorMessage() and/or CommandResult.getException() . Note that the new object's _id must be unique in the collection. If you don't specify the value, MongoDB will automatically generate one for you. But if you do specify it and it is not unique, the insert operation will fail (and for Camel to notice, you will need to enable invokeGetLastError or set a WriteConcern that waits for the write result).
This is not a limitation of the component, but it is how things work in MongoDB for higher throughput. If you are using a custom _id , you are expected to ensure at the application level that it is unique (and this is a good practice too). The OID(s) of the inserted record(s) are stored in the message header under the CamelMongoOid key ( MongoDbConstants.OID constant). The value stored is org.bson.types.ObjectId for a single insert or java.util.List<org.bson.types.ObjectId> if multiple records have been inserted. In MongoDB Java Driver 3.x the insertOne and insertMany operations return void. The Camel insert operation returns the Document or List of Documents inserted. Note that each Document is updated with a new OID if needed. 95.8.2.2. save The save operation is equivalent to an upsert (UPdate, inSERT) operation, where the record will be updated, and if it doesn't exist, it will be inserted, all in one atomic operation. MongoDB will perform the matching based on the _id field. Beware that in case of an update, the object is replaced entirely and the usage of MongoDB's USDmodifiers is not permitted. Therefore, if you want to manipulate the object if it already exists, you have two options: Perform a query to retrieve the entire object first along with all its fields (may not be efficient), alter it inside Camel and then save it. Use the update operation with USDmodifiers , which will execute the update at the server side instead. You can enable the upsert flag, in which case if an insert is required, MongoDB will apply the USDmodifiers to the filter query object and insert the result. If the document to be saved does not contain the _id attribute, the operation will be an insert, and the new _id created will be placed in the CamelMongoOid header. For example: from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=save"); // route: from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=save"); org.bson.Document docForSave = new org.bson.Document(); docForSave.put("key", "value"); Object result = template.requestBody("direct:insert", docForSave); 95.8.2.3. update Update one or multiple records in the collection. Requires a filter query and update rules. You can define the filter using the MongoDBConstants.CRITERIA header as Bson and define the update rules as Bson in the body. Note Update after enrich While defining the filter by using the MongoDBConstants.CRITERIA header as Bson to query MongoDB before you do the update, note that you need to remove it from the resulting Camel exchange during aggregation if you use the enrich pattern with an aggregation strategy and then apply the MongoDB update. If you do not remove this header during aggregation or redefine the MongoDBConstants.CRITERIA header before sending the Camel exchange to the MongoDB producer endpoint, you may end up with an invalid Camel exchange payload while updating MongoDB. The second way requires a List<Bson> as the IN message body containing exactly 2 elements: Element 1 (index 0) ⇒ filter query ⇒ determines what objects will be affected, same as a typical query object Element 2 (index 1) ⇒ update rules ⇒ how matched objects will be updated. All modifier operations from MongoDB are supported. Note Multiupdates By default, MongoDB will only update 1 object even if multiple objects match the filter query. To instruct MongoDB to update all matching records, set the CamelMongoDbMultiUpdate IN message header to true .
A header with key CamelMongoDbRecordsAffected will be returned ( MongoDbConstants.RECORDS_AFFECTED constant) with the number of records updated (copied from WriteResult.getN() ). Supports the following IN message headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbMultiUpdate MongoDbConstants.MULTIUPDATE If the update should be applied to all objects matching. See http://www.mongodb.org/display/DOCS/Atomic+Operations boolean/Boolean CamelMongoDbUpsert MongoDbConstants.UPSERT If the database should create the element if it does not exist boolean/Boolean For example, the following will update all records whose filterField field equals true by setting the value of the "scientist" field to "Darwin": // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); List<Bson> body = new ArrayList<>(); Bson filterField = Filters.eq("filterField", true); body.add(filterField); BsonDocument updateObj = new BsonDocument().append("USDset", new BsonDocument("scientist", new BsonString("Darwin"))); body.add(updateObj); Object result = template.requestBodyAndHeader("direct:update", body, MongoDbConstants.MULTIUPDATE, true); // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); Map<String, Object> headers = new HashMap<>(2); headers.put(MongoDbConstants.MULTIUPDATE, true); headers.put(MongoDbConstants.CRITERIA, Filters.eq("filterField", true)); Bson updateObj = Updates.set("scientist", "Darwin"); Object result = template.requestBodyAndHeaders("direct:update", updateObj, headers); // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); String updateObj = "[{\"filterField\": true}, {\"USDset\": {\"scientist\": \"Darwin\"}}]"; Object result = template.requestBodyAndHeader("direct:update", updateObj, MongoDbConstants.MULTIUPDATE, true); 95.8.3. Delete operations 95.8.3.1. remove Remove matching records from the collection. The IN message body will act as the removal filter query, and is expected to be of type DBObject or a type convertible to it. The following example will remove all objects whose field 'conditionField' equals true, in the science database, notableScientists collection: // route: from("direct:remove").to("mongodb:myDb?database=science&collection=notableScientists&operation=remove"); Bson conditionField = Filters.eq("conditionField", true); Object result = template.requestBody("direct:remove", conditionField); A header with key CamelMongoDbRecordsAffected is returned ( MongoDbConstants.RECORDS_AFFECTED constant) with type int , containing the number of records deleted (copied from WriteResult.getN() ). 95.8.4. Bulk Write Operations 95.8.4.1. bulkWrite Performs write operations in bulk with controls for order of execution. Requires a List<WriteModel<Document>> as the IN message body containing commands for insert, update, and delete operations.
The following example will insert a new scientist "Pierre Curie", update the record with id "5" by setting the value of the "scientist" field to "Marie Curie", and delete the record with id "3" : // route: from("direct:bulkWrite").to("mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite"); List<WriteModel<Document>> bulkOperations = Arrays.asList( new InsertOneModel<>(new Document("scientist", "Pierre Curie")), new UpdateOneModel<>(new Document("_id", "5"), new Document("USDset", new Document("scientist", "Marie Curie"))), new DeleteOneModel<>(new Document("_id", "3"))); BulkWriteResult result = template.requestBody("direct:bulkWrite", bulkOperations, BulkWriteResult.class); By default, operations are executed in order and interrupted on the first write error without processing any remaining write operations in the list. To instruct MongoDB to continue to process remaining write operations in the list, set the CamelMongoDbBulkOrdered IN message header to false . Unordered operations may be executed in parallel, and their order of execution is not guaranteed. Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbBulkOrdered MongoDbConstants.BULK_ORDERED Perform an ordered or unordered operation execution. Defaults to true. boolean/Boolean 95.8.5. Other operations 95.8.5.1. aggregate Perform an aggregation with the given pipeline contained in the body. Aggregations can be long and heavy operations. Use with care. // route: from("direct:aggregate").to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate"); List<Bson> aggregate = Arrays.asList(match(or(eq("scientist", "Darwin"), eq("scientist", "Einstein"))), group("USDscientist", sum("count", 1))); from("direct:aggregate") .setBody().constant(aggregate) .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate") .to("mock:resultAggregate"); Supports the following IN message headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbBatchSize MongoDbConstants.BATCH_SIZE Sets the number of documents to return per batch. int/Integer CamelMongoDbAllowDiskUse MongoDbConstants.ALLOW_DISK_USE Enable aggregation pipeline stages to write data to temporary files. boolean/Boolean By default, a List of all results is returned. This can be heavy on memory depending on the size of the results. A safer alternative is to set outputType=MongoIterable. The Processor will then see an iterable in the message body, allowing it to step through the results one by one. Thus setting a batch size and returning an iterable allows for efficient retrieval and processing of the result. An example would look like: List<Bson> aggregate = Arrays.asList(match(or(eq("scientist", "Darwin"), eq("scientist", "Einstein"))), group("USDscientist", sum("count", 1))); from("direct:aggregate") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant(aggregate) .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=MongoIterable") .split(body()) .streaming() .to("mock:resultAggregate"); Note that calling .split(body()) is enough to send the entries down the route one by one, however it would still load all the entries into memory first. Calling .streaming() is thus required to load the data into memory in batches. 95.8.5.2. getDbStats Equivalent of running the db.stats() command in the MongoDB shell, which displays useful statistic figures about the database.
For example: Usage example: // from("direct:getDbStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getDbStats"); Object result = template.requestBody("direct:getDbStats", "irrelevantBody"); assertTrue("Result is not of type Document", result instanceof Document); The operation will return a data structure similar to the one displayed in the shell, in the form of a Document in the OUT message body. 95.8.5.3. getColStats Equivalent of running the db.collection.stats() command in the MongoDB shell, which displays useful statistic figures about the collection. For example: Usage example: // from("direct:getColStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getColStats"); Object result = template.requestBody("direct:getColStats", "irrelevantBody"); assertTrue("Result is not of type Document", result instanceof Document); The operation will return a data structure similar to the one displayed in the shell, in the form of a Document in the OUT message body. 95.8.5.4. command Run the body as a command on database. Useful for admin operation as getting host information, replication or sharding status. Collection parameter is not use for this operation. // route: from("command").to("mongodb:myDb?database=science&operation=command"); DBObject commandBody = new BasicDBObject("hostInfo", "1"); Object result = template.requestBody("direct:command", commandBody); 95.8.6. Dynamic operations An Exchange can override the endpoint's fixed operation by setting the CamelMongoDbOperation header, defined by the MongoDbConstants.OPERATION_HEADER constant. The values supported are determined by the MongoDbOperation enumeration and match the accepted values for the operation parameter on the endpoint URI. For example: // from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert"); Object result = template.requestBodyAndHeader("direct:insert", "irrelevantBody", MongoDbConstants.OPERATION_HEADER, "count"); assertTrue("Result is not of type Long", result instanceof Long); 95.9. Consumers There are several types of consumers: Tailable Cursor Consumer Change Streams Consumer 95.9.1. Tailable Cursor Consumer MongoDB offers a mechanism to instantaneously consume ongoing data from a collection, by keeping the cursor open just like the tail -f command of *nix systems. This mechanism is significantly more efficient than a scheduled poll, due to the fact that the server pushes new data to the client as it becomes available, rather than making the client ping back at scheduled intervals to fetch new data. It also reduces otherwise redundant network traffic. There is only one requisite to use tailable cursors: the collection must be a "capped collection", meaning that it will only hold N objects, and when the limit is reached, MongoDB flushes old objects in the same order they were originally inserted. For more information, please refer to http://www.mongodb.org/display/DOCS/Tailable+Cursors . The Camel MongoDB component implements a tailable cursor consumer, making this feature available for you to use in your Camel routes. As new objects are inserted, MongoDB will push them as Document in natural order to your tailable cursor consumer, who will transform them to an Exchange and will trigger your route logic. 95.10. How the tailable cursor consumer works To turn a cursor into a tailable cursor, a few special flags are to be signalled to MongoDB when first generating the cursor. 
Once created, the cursor will then stay open and will block upon calling the MongoCursor.() method until new data arrives. However, the MongoDB server reserves itself the right to kill your cursor if new data doesn't appear after an indeterminate period. If you are interested to continue consuming new data, you have to regenerate the cursor. And to do so, you will have to remember the position where you left off or else you will start consuming from the top again. The Camel MongoDB tailable cursor consumer takes care of all these tasks for you. You will just need to provide the key to some field in your data of increasing nature, which will act as a marker to position your cursor every time it is regenerated, e.g. a timestamp, a sequential ID, etc. It can be of any datatype supported by MongoDB. Date, Strings and Integers are found to work well. We call this mechanism "tail tracking" in the context of this component. The consumer will remember the last value of this field and whenever the cursor is to be regenerated, it will run the query with a filter like: increasingField > lastValue , so that only unread data is consumed. Setting the increasing field: Set the key of the increasing field on the endpoint URI tailTrackingIncreasingField option. In Camel 2.10, it must be a top-level field in your data, as nested navigation for this field is not yet supported. That is, the "timestamp" field is okay, but "nested.timestamp" will not work. Please open a ticket in the Camel JIRA if you do require support for nested increasing fields. Cursor regeneration delay: One thing to note is that if new data is not already available upon initialisation, MongoDB will kill the cursor instantly. Since we don't want to overwhelm the server in this case, a cursorRegenerationDelay option has been introduced (with a default value of 1000ms.), which you can modify to suit your needs. An example: from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime") .id("tailableCursorConsumer1") .autoStartup(false) .to("mock:test"); The above route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default regeneration cursor delay of 1000ms. 95.11. Persistent tail tracking Standard tail tracking is volatile and the last value is only kept in memory. However, in practice you will need to restart your Camel container every now and then, but your last value would then be lost and your tailable cursor consumer would start consuming from the top again, very likely sending duplicate records into your route. To overcome this situation, you can enable the persistent tail tracking feature to keep track of the last consumed increasing value in a special collection inside your MongoDB database too. When the consumer initialises again, it will restore the last tracked value and continue as if nothing happened. The last read value is persisted on two occasions: every time the cursor is regenerated and when the consumer shuts down. We may consider persisting at regular intervals too in the future (flush every 5 seconds) for added robustness if the demand is there. To request this feature, please open a ticket in the Camel JIRA. 95.12. 
Enabling persistent tail tracking To enable this function, set at least the following options on the endpoint URI: persistentTailTracking option to true persistentId option to a unique identifier for this consumer, so that the same collection can be reused across many consumers Additionally, you can set the tailTrackDb , tailTrackCollection and tailTrackField options to customise where the runtime information will be stored. Refer to the endpoint options table at the top of this page for descriptions of each option. For example, the following route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default regeneration cursor delay of 1000ms, with persistent tail tracking turned on, and persisting under the "cancellationsTracker" id on the "flights.camelTailTracking", storing the last processed value under the "lastTrackingValue" field ( camelTailTracking and lastTrackingValue are defaults). from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker") .id("tailableCursorConsumer2") .autoStartup(false) .to("mock:test"); Below is another example identical to the one above, but where the persistent tail tracking runtime information will be stored under the "trackers.camelTrackers" collection, in the "lastProcessedDepartureTime" field: from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers" + "&tailTrackField=lastProcessedDepartureTime") .id("tailableCursorConsumer3") .autoStartup(false) .to("mock:test"); 95.12.1. Change Streams Consumer Change Streams allow applications to access real-time data changes without the complexity and risk of tailing the MongoDB oplog. Applications can use change streams to subscribe to all data changes on a collection and immediately react to them. Because change streams use the aggregation framework, applications can also filter for specific changes or transform the notifications at will. The exchange body will contain the full document of any change. To configure Change Streams Consumer you need to specify consumerType , database , collection and optional JSON property streamFilter to filter events. That JSON property is standard MongoDB USDmatch aggregation. It could be easily specified using XML DSL configuration: <route id="filterConsumer"> <from uri="mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }"/> <to uri="mock:test"/> </route> Java configuration: from("mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }") .to("mock:test"); Note You can externalize the streamFilter value into a property placeholder which allows the endpoint URI parameters to be cleaner and easier to read. The changeStreams consumer type will also return the following OUT headers: Header key Quick constant Description (extracted from MongoDB API doc) Data type CamelMongoDbStreamOperationType MongoDbConstants.STREAM_OPERATION_TYPE The type of operation that occurred. Can be any of the following values: insert, delete, replace, update, drop, rename, dropDatabase, invalidate. 
String _id MongoDbConstants.MONGO_ID A document that contains the _id of the document created or modified by the insert, replace, delete, update operations (i.e. CRUD operations). For sharded collections, also displays the full shard key for the document. The _id field is not repeated if it is already a part of the shard key. ObjectId 95.13. Type conversions The MongoDbBasicConverters type converter included with the camel-mongodb component provides the following conversions: Name From type To type How? fromMapToDocument Map Document constructs a new Document via the new Document(Map m) constructor. fromDocumentToMap Document Map Document already implements Map . fromStringToDocument String Document uses com.mongodb.Document.parse(String s) . fromStringToObjectId String ObjectId constructs a new ObjectId via the new ObjectId(s) fromFileToDocument File Document uses fromInputStreamToDocument under the hood fromInputStreamToDocument InputStream Document converts the inputstream bytes to a Document fromStringToList String List<Bson> uses org.bson.codecs.configuration.CodecRegistries to convert to BsonArray then to List<Bson>. This type converter is auto-discovered, so you don't need to configure anything manually. 95.14. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.mongodb.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mongodb.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.mongodb.enabled Whether to enable auto configuration of the mongodb component. This is enabled by default. Boolean camel.component.mongodb.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mongodb.mongo-connection Shared client used for connection. All endpoints generated from the component will share this connection client. The option is a com.mongodb.client.MongoClient type. MongoClient | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mongodb-starter</artifactId> </dependency>",
"mongodb:connectionBean?database=databaseName&collection=collectionName&operation=operationName[&moreOptions...]",
"mongodb:connectionBean",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:context=\"http://www.springframework.org/schema/context\" xmlns:mongo=\"http://www.springframework.org/schema/data/mongo\" xsi:schemaLocation=\"http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd\"> <mongo:mongo-client id=\"mongoBean\" host=\"USD{mongo.url}\" port=\"USD{mongo.port}\" credentials=\"USD{mongo.user}:USD{mongo.pass}@USD{mongo.dbname}\"> <mongo:client-options write-concern=\"NORMAL\" /> </mongo:mongo-client> </beans>",
"<route> <from uri=\"direct:start\" /> <!-- using bean 'mongoBean' defined above --> <to uri=\"mongodb:mongoBean?database=USD{mongodb.database}&collection=USD{mongodb.collection}&operation=getDbStats\" /> <to uri=\"direct:result\" /> </route>",
"from(\"direct:findById\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findById\") .to(\"mock:resultFindById\");",
"from(\"direct:findById\") .convertBodyTo(ObjectId.class) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findById\") .to(\"mock:resultFindById\");",
"from(\"direct:findOneByQuery\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery\") .to(\"mock:resultFindOneByQuery\");",
"from(\"direct:findOneByQuery\") .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq(\"name\", \"Raul Kripalani\"))) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery\") .to(\"mock:resultFindOneByQuery\");",
"from(\"direct:findAll\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") .to(\"mock:resultFindAll\");",
"from(\"direct:findAll\") .setHeader(MongoDbConstants.CRITERIA, Filters.eq(\"name\", \"Raul Kripalani\")) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") .to(\"mock:resultFindAll\");",
"from(\"direct:findAll\") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq(\"name\", \"Raul Kripalani\"))) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=MongoIterable\") .to(\"mock:resultFindAll\");",
"// from(\"direct:count\").to(\"mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true\"); Long result = template.requestBodyAndHeader(\"direct:count\", \"irrelevantBody\", MongoDbConstants.COLLECTION, \"dynamicCollectionName\"); assertTrue(\"Result is not of type Long\", result instanceof Long);",
"Document query = Long count = template.requestBodyAndHeader(\"direct:count\", query, MongoDbConstants.COLLECTION, \"dynamicCollectionName\");",
"// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") Bson fieldProjection = Projection.exclude(\"_id\", \"boringField\"); Object result = template.requestBodyAndHeader(\"direct:findAll\", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection);",
"// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") Bson fieldProjection = Projection.exclude(\"_id\", \"boringField\"); Object result = template.requestBodyAndHeader(\"direct:findAll\", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection);",
"// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") Bson sorts = Sorts.descending(\"_id\"); Object result = template.requestBodyAndHeader(\"direct:findAll\", ObjectUtils.NULL, MongoDbConstants.SORT_BY, sorts);",
".from(\"direct:someTriggeringEvent\") .setHeader(MongoDbConstants.SORT_BY).constant(Sorts.descending(\"documentTimestamp\")) .setHeader(MongoDbConstants.FIELDS_PROJECTION).constant(Projection.include(\"documentTimestamp\")) .setBody().constant(\"{}\") .to(\"mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery\") .to(\"direct:aMyBatisParameterizedSelect\");",
"from(\"direct:insert\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=insert\");",
"from(\"direct:insert\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=save\");",
"// route: from(\"direct:insert\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=save\"); org.bson.Document docForSave = new org.bson.Document(); docForSave.put(\"key\", \"value\"); Object result = template.requestBody(\"direct:insert\", docForSave);",
"// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); List<Bson> body = new ArrayList<>(); Bson filterField = Filters.eq(\"filterField\", true); body.add(filterField); BsonDocument updateObj = new BsonDocument().append(\"USDset\", new BsonDocument(\"scientist\", new BsonString(\"Darwin\"))); body.add(updateObj); Object result = template.requestBodyAndHeader(\"direct:update\", body, MongoDbConstants.MULTIUPDATE, true);",
"// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); Maps<String, Object> headers = new HashMap<>(2); headers.add(MongoDbConstants.MULTIUPDATE, true); headers.add(MongoDbConstants.FIELDS_FILTER, Filters.eq(\"filterField\", true)); String updateObj = Updates.set(\"scientist\", \"Darwin\");; Object result = template.requestBodyAndHeaders(\"direct:update\", updateObj, headers);",
"// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); String updateObj = \"[{\\\"filterField\\\": true}, {\\\"USDset\\\", {\\\"scientist\\\", \\\"Darwin\\\"}}]\"; Object result = template.requestBodyAndHeader(\"direct:update\", updateObj, MongoDbConstants.MULTIUPDATE, true);",
"// route: from(\"direct:remove\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=remove\"); Bson conditionField = Filters.eq(\"conditionField\", true); Object result = template.requestBody(\"direct:remove\", conditionField);",
"// route: from(\"direct:bulkWrite\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite\"); List<WriteModel<Document>> bulkOperations = Arrays.asList( new InsertOneModel<>(new Document(\"scientist\", \"Pierre Curie\")), new UpdateOneModel<>(new Document(\"_id\", \"5\"), new Document(\"USDset\", new Document(\"scientist\", \"Marie Curie\"))), new DeleteOneModel<>(new Document(\"_id\", \"3\"))); BulkWriteResult result = template.requestBody(\"direct:bulkWrite\", bulkOperations, BulkWriteResult.class);",
"// route: from(\"direct:aggregate\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate\"); List<Bson> aggregate = Arrays.asList(match(or(eq(\"scientist\", \"Darwin\"), eq(\"scientist\", group(\"USDscientist\", sum(\"count\", 1))); from(\"direct:aggregate\") .setBody().constant(aggregate) .to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate\") .to(\"mock:resultAggregate\");",
"List<Bson> aggregate = Arrays.asList(match(or(eq(\"scientist\", \"Darwin\"), eq(\"scientist\", group(\"USDscientist\", sum(\"count\", 1))); from(\"direct:aggregate\") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant(aggregate) .to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=MongoIterable\") .split(body()) .streaming() .to(\"mock:resultAggregate\");",
"> db.stats(); { \"db\" : \"test\", \"collections\" : 7, \"objects\" : 719, \"avgObjSize\" : 59.73296244784423, \"dataSize\" : 42948, \"storageSize\" : 1000058880, \"numExtents\" : 9, \"indexes\" : 4, \"indexSize\" : 32704, \"fileSize\" : 1275068416, \"nsSizeMB\" : 16, \"ok\" : 1 }",
"// from(\"direct:getDbStats\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=getDbStats\"); Object result = template.requestBody(\"direct:getDbStats\", \"irrelevantBody\"); assertTrue(\"Result is not of type Document\", result instanceof Document);",
"> db.camelTest.stats(); { \"ns\" : \"test.camelTest\", \"count\" : 100, \"size\" : 5792, \"avgObjSize\" : 57.92, \"storageSize\" : 20480, \"numExtents\" : 2, \"nindexes\" : 1, \"lastExtentSize\" : 16384, \"paddingFactor\" : 1, \"flags\" : 1, \"totalIndexSize\" : 8176, \"indexSizes\" : { \"_id_\" : 8176 }, \"ok\" : 1 }",
"// from(\"direct:getColStats\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=getColStats\"); Object result = template.requestBody(\"direct:getColStats\", \"irrelevantBody\"); assertTrue(\"Result is not of type Document\", result instanceof Document);",
"// route: from(\"command\").to(\"mongodb:myDb?database=science&operation=command\"); DBObject commandBody = new BasicDBObject(\"hostInfo\", \"1\"); Object result = template.requestBody(\"direct:command\", commandBody);",
"// from(\"direct:insert\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=insert\"); Object result = template.requestBodyAndHeader(\"direct:insert\", \"irrelevantBody\", MongoDbConstants.OPERATION_HEADER, \"count\"); assertTrue(\"Result is not of type Long\", result instanceof Long);",
"from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime\") .id(\"tailableCursorConsumer1\") .autoStartup(false) .to(\"mock:test\");",
"from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true\" + \"&persistentId=cancellationsTracker\") .id(\"tailableCursorConsumer2\") .autoStartup(false) .to(\"mock:test\");",
"from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true\" + \"&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers\" + \"&tailTrackField=lastProcessedDepartureTime\") .id(\"tailableCursorConsumer3\") .autoStartup(false) .to(\"mock:test\");",
"<route id=\"filterConsumer\"> <from uri=\"mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }\"/> <to uri=\"mock:test\"/> </route>",
"from(\"mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }\") .to(\"mock:test\");"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mongodb-component-starter |
Chapter 6. Configuring a network bridge | Chapter 6. Configuring a network bridge A network bridge is a link-layer device which forwards traffic between networks based on a table of MAC addresses. The bridge builds the MAC addresses table by listening to network traffic and thereby learning what hosts are connected to each network. For example, you can use a software bridge on a Red Hat Enterprise Linux host to emulate a hardware bridge or in virtualization environments, to integrate virtual machines (VM) to the same network as the host. A bridge requires a network device in each network the bridge should connect. When you configure a bridge, the bridge is called controller and the devices it uses ports . You can create bridges on different types of devices, such as: Physical and virtual Ethernet devices Network bonds Network teams VLAN devices Due to the IEEE 802.11 standard which specifies the use of 3-address frames in Wi-Fi for the efficient use of airtime, you cannot configure a bridge over Wi-Fi networks operating in Ad-Hoc or Infrastructure modes. 6.1. Configuring a network bridge by using nmcli To configure a network bridge on the command line, use the nmcli utility. Prerequisites Two or more physical or virtual network devices are installed on the server. To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the server. To use team, bond, or VLAN devices as ports of the bridge, you can either create these devices while you create the bridge or you can create them in advance as described in: Configuring a network team by using nmcli Configuring a network bond by using nmcli Configuring VLAN tagging by using nmcli Procedure Create a bridge interface: This command creates a bridge named bridge0 , enter: Display the network interfaces, and note the names of the interfaces you want to add to the bridge: In this example: enp7s0 and enp8s0 are not configured. To use these devices as ports, add connection profiles in the step. bond0 and bond1 have existing connection profiles. To use these devices as ports, modify their profiles in the step. Assign the interfaces to the bridge. If the interfaces you want to assign to the bridge are not configured, create new connection profiles for them: These commands create profiles for enp7s0 and enp8s0 , and add them to the bridge0 connection. If you want to assign an existing connection profile to the bridge: Set the master parameter of these connections to bridge0 : These commands assign the existing connection profiles named bond0 and bond1 to the bridge0 connection. Reactivate the connections: Configure the IPv4 settings: If you plan to use this bridge device as a port of other devices, enter: To use DHCP, no action is required. To set a static IPv4 address, network mask, default gateway, and DNS server to the bridge0 connection, enter: Configure the IPv6 settings: If you plan to use this bridge device as a port of other devices, enter: To use stateless address autoconfiguration (SLAAC), no action is required. To set a static IPv6 address, network mask, default gateway, and DNS server to the bridge0 connection, enter: Optional: Configure further properties of the bridge. For example, to set the Spanning Tree Protocol (STP) priority of bridge0 to 16384 , enter: By default, STP is enabled. 
Activate the connection: Verify that the ports are connected, and the CONNECTION column displays the port's connection name: When you activate any port of the connection, NetworkManager also activates the bridge, but not the other ports of it. You can configure that Red Hat Enterprise Linux enables all ports automatically when the bridge is enabled: Enable the connection.autoconnect-slaves parameter of the bridge connection: Reactivate the bridge: Verification Use the ip utility to display the link status of Ethernet devices that are ports of a specific bridge: Use the bridge utility to display the status of Ethernet devices that are ports of any bridge device: To display the status for a specific Ethernet device, use the bridge link show dev <ethernet_device_name> command. Additional resources bridge(8) and nm-settings(5) man pages on your system NetworkManager duplicates a connection after restart of NetworkManager service (Red Hat Knowledgebase) How to configure a bridge with VLAN information? (Red Hat Knowledgebase) 6.2. Configuring a network bridge by using the RHEL web console Use the RHEL web console to configure a network bridge if you prefer to manage network settings using a web browser-based interface. Prerequisites Two or more physical or virtual network devices are installed on the server. To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the server. To use team, bond, or VLAN devices as ports of the bridge, you can either create these devices while you create the bridge or you can create them in advance as described in: Configuring a network team using the RHEL web console Configuring a network bond by using the RHEL web console Configuring VLAN tagging by using the RHEL web console You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Select the Networking tab in the navigation on the left side of the screen. Click Add bridge in the Interfaces section. Enter the name of the bridge device you want to create. Select the interfaces that should be ports of the bridge. Optional: Enable the Spanning tree protocol (STP) feature to avoid bridge loops and broadcast radiation. Click Apply . By default, the bridge uses a dynamic IP address. If you want to set a static IP address: Click the name of the bridge in the Interfaces section. Click Edit to the protocol you want to configure. Select Manual to Addresses , and enter the IP address, prefix, and default gateway. In the DNS section, click the + button, and enter the IP address of the DNS server. Repeat this step to set multiple DNS servers. In the DNS search domains section, click the + button, and enter the search domain. If the interface requires static routes, configure them in the Routes section. Click Apply Verification Select the Networking tab in the navigation on the left side of the screen, and check if there is incoming and outgoing traffic on the interface: 6.3. Configuring a network bridge by using nmtui The nmtui application provides a text-based user interface for NetworkManager. You can use nmtui to configure a network bridge on a host without a graphical interface. Note In nmtui : Navigate by using the cursor keys. Press a button by selecting it and hitting Enter . Select and clear checkboxes by using Space . 
To return to the screen, use ESC . Prerequisites Two or more physical or virtual network devices are installed on the server. To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the server. Procedure If you do not know the network device names on which you want configure a network bridge, display the available devices: Start nmtui : Select Edit a connection , and press Enter . Press Add . Select Bridge from the list of network types, and press Enter . Optional: Enter a name for the NetworkManager profile to be created. On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. Enter the bridge device name to be created into the Device field. Add ports to the bridge to be created: Press Add to the Slaves list. Select the type of the interface you want to add as port to the bridge, for example, Ethernet . Optional: Enter a name for the NetworkManager profile to be created for this bridge port. Enter the port's device name into the Device field. Press OK to return to the window with the bridge settings. Figure 6.1. Adding an Ethernet device as port to a bridge Repeat these steps to add more ports to the bridge. Depending on your environment, configure the IP address settings in the IPv4 configuration and IPv6 configuration areas accordingly. For this, press the button to these areas, and select: Disabled , if the bridge does not require an IP address. Automatic , if a DHCP server or stateless address autoconfiguration (SLAAC) dynamically assigns an IP address to the bridge. Manual , if the network requires static IP address settings. In this case, you must fill further fields: Press Show to the protocol you want to configure to display additional fields. Press Add to Addresses , and enter the IP address and the subnet mask in Classless Inter-Domain Routing (CIDR) format. If you do not specify a subnet mask, NetworkManager sets a /32 subnet mask for IPv4 addresses and /64 for IPv6 addresses. Enter the address of the default gateway. Press Add to DNS servers , and enter the DNS server address. Press Add to Search domains , and enter the DNS search domain. Figure 6.2. Example of a bridge connection without IP address settings Press OK to create and automatically activate the new connection. Press Back to return to the main menu. Select Quit , and press Enter to close the nmtui application. Verification Use the ip utility to display the link status of Ethernet devices that are ports of a specific bridge: Use the bridge utility to display the status of Ethernet devices that are ports of any bridge device: To display the status for a specific Ethernet device, use the bridge link show dev <ethernet_device_name> command. 6.4. Configuring a network bridge by using nm-connection-editor If you use Red Hat Enterprise Linux with a graphical interface, you can configure network bridges using the nm-connection-editor application. Note that nm-connection-editor can add only new ports to a bridge. To use an existing connection profile as a port, create the bridge using the nmcli utility as described in Configuring a network bridge by using nmcli . Prerequisites Two or more physical or virtual network devices are installed on the server. To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the server. To use team, bond, or VLAN devices as ports of the bridge, ensure that these devices are not already configured. 
Procedure Open a terminal, and enter nm-connection-editor : Click the + button to add a new connection. Select the Bridge connection type, and click Create . On the Bridge tab: Optional: Set the name of the bridge interface in the Interface name field. Click the Add button to create a new connection profile for a network interface and adding the profile as a port to the bridge. Select the connection type of the interface. For example, select Ethernet for a wired connection. Optional: Set a connection name for the port device. If you create a connection profile for an Ethernet device, open the Ethernet tab, and select in the Device field the network interface you want to add as a port to the bridge. If you selected a different device type, configure it accordingly. Click Save . Repeat the step for each interface you want to add to the bridge. Optional: Configure further bridge settings, such as Spanning Tree Protocol (STP) options. Configure the IP address settings on both the IPv4 Settings and IPv6 Settings tabs: If you plan to use this bridge device as a port of other devices, set the Method field to Disabled . To use DHCP, leave the Method field at its default, Automatic (DHCP) . To use static IP settings, set the Method field to Manual and fill the fields accordingly: Click Save . Close nm-connection-editor . Verification Use the ip utility to display the link status of Ethernet devices that are ports of a specific bridge. Use the bridge utility to display the status of Ethernet devices that are ports in any bridge device: To display the status for a specific Ethernet device, use the bridge link show dev ethernet_device_name command. Additional resources Configuring a network bond by using nm-connection-editor Configuring a network team by using nm-connection-editor Configuring VLAN tagging by using nm-connection-editor Configuring NetworkManager to avoid using a specific profile to provide a default gateway How to configure a bridge with VLAN information? (Red Hat Knowledgebase) 6.5. Configuring a network bridge by using nmstatectl Use the nmstatectl utility to configure a network bridge through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Depending on your environment, adjust the YAML file accordingly. For example, to use different devices than Ethernet adapters in the bridge, adapt the base-iface attribute and type attributes of the ports you use in the bridge. Prerequisites Two or more physical or virtual network devices are installed on the server. To use Ethernet devices as ports in the bridge, the physical or virtual Ethernet devices must be installed on the server. To use team, bond, or VLAN devices as ports in the bridge, set the interface name in the port list, and define the corresponding interfaces. The nmstate package is installed. 
Procedure Create a YAML file, for example ~/create-bridge.yml , with the following content: --- interfaces: - name: bridge0 type: linux-bridge state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false bridge: options: stp: enabled: true port: - name: enp1s0 - name: enp7s0 - name: enp1s0 type: ethernet state: up - name: enp7s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: bridge0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: bridge0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb These settings define a network bridge with the following settings: Network interfaces in the bridge: enp1s0 and enp7s0 Spanning Tree Protocol (STP): Enabled Static IPv4 address: 192.0.2.1 with the /24 subnet mask Static IPv6 address: 2001:db8:1::1 with the /64 subnet mask IPv4 default gateway: 192.0.2.254 IPv6 default gateway: 2001:db8:1::fffe IPv4 DNS server: 192.0.2.200 IPv6 DNS server: 2001:db8:1::ffbb DNS search domain: example.com Apply the settings to the system: Verification Display the status of the devices and connections: Display all settings of the connection profile: Display the connection settings in YAML format: Additional resources nmstatectl(8) man page on your system /usr/share/doc/nmstate/examples/ directory How to configure a bridge with VLAN information? (Red Hat Knowledgebase) 6.6. Configuring a network bridge by using the network RHEL system role You can connect multiple networks on layer 2 of the Open Systems Interconnection (OSI) model by creating a network bridge. To configure a bridge, create a connection profile in NetworkManager. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure a bridge and, if a connection profile for the bridge's parent device does not exist, the role can create it as well. Note If you want to assign IP addresses, gateways, and DNS settings to a bridge, configure them on the bridge and not on its ports. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Two or more physical or virtual network devices are installed on the server. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bridge connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bridge profile - name: bridge0 type: bridge interface_name: bridge0 ip: dhcp4: yes auto6: yes state: up # Port profile for the 1st Ethernet device - name: bridge0-port1 interface_name: enp7s0 type: ethernet controller: bridge0 port_type: bridge state: up # Port profile for the 2nd Ethernet device - name: bridge0-port2 interface_name: enp8s0 type: ethernet controller: bridge0 port_type: bridge state: up The settings specified in the example playbook include the following: type: <profile_type> Sets the type of the profile to create. The example playbook creates three connection profiles: One for the bridge and two for the Ethernet devices.
dhcp4: yes Enables automatic IPv4 address assignment from DHCP, PPP, or similar services. auto6: yes Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the link status of Ethernet devices that are ports of a specific bridge: Display the status of Ethernet devices that are ports of any bridge device: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory | [
"nmcli connection add type bridge con-name bridge0 ifname bridge0",
"nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet disconnected -- enp8s0 ethernet disconnected -- bond0 bond connected bond0 bond1 bond connected bond1",
"nmcli connection add type ethernet slave-type bridge con-name bridge0-port1 ifname enp7s0 master bridge0 nmcli connection add type ethernet slave-type bridge con-name bridge0-port2 ifname enp8s0 master bridge0",
"nmcli connection modify bond0 master bridge0 nmcli connection modify bond1 master bridge0",
"nmcli connection up bond0 nmcli connection up bond1",
"nmcli connection modify bridge0 ipv4.method disabled",
"nmcli connection modify bridge0 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.dns-search 'example.com' ipv4.method manual",
"nmcli connection modify bridge0 ipv6.method disabled",
"nmcli connection modify bridge0 ipv6.addresses '2001:db8:1::1/64' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.dns-search 'example.com' ipv6.method manual",
"nmcli connection modify bridge0 bridge.priority '16384'",
"nmcli connection up bridge0",
"nmcli device DEVICE TYPE STATE CONNECTION enp7s0 ethernet connected bridge0-port1 enp8s0 ethernet connected bridge0-port2",
"nmcli connection modify bridge0 connection.autoconnect-slaves 1",
"nmcli connection up bridge0",
"ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff",
"bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100 5: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state forwarding priority 32 cost 100 6: enp11s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state blocking priority 32 cost 100",
"nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet unavailable -- enp8s0 ethernet unavailable --",
"nmtui",
"ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff",
"bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100",
"nm-connection-editor",
"ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff",
"bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100 5: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state forwarding priority 32 cost 100 6: enp11s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state blocking priority 32 cost 100",
"--- interfaces: - name: bridge0 type: linux-bridge state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false bridge: options: stp: enabled: true port: - name: enp1s0 - name: enp7s0 - name: enp1s0 type: ethernet state: up - name: enp7s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: bridge0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: bridge0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb",
"nmstatectl apply ~/create-bridge.yml",
"nmcli device status DEVICE TYPE STATE CONNECTION bridge0 bridge connected bridge0",
"nmcli connection show bridge0 connection.id: bridge0_ connection.uuid: e2cc9206-75a2-4622-89cf-1252926060a9 connection.stable-id: -- connection.type: bridge connection.interface-name: bridge0",
"nmstatectl show bridge0",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bridge connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bridge profile - name: bridge0 type: bridge interface_name: bridge0 ip: dhcp4: yes auto6: yes state: up # Port profile for the 1st Ethernet device - name: bridge0-port1 interface_name: enp7s0 type: ethernet controller: bridge0 port_type: bridge state: up # Port profile for the 2nd Ethernet device - name: bridge0-port2 interface_name: enp8s0 type: ethernet controller: bridge0 port_type: bridge state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ip link show master bridge0' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff",
"ansible managed-node-01.example.com -m command -a 'bridge link show' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/configuring-a-network-bridge_configuring-and-managing-networking |
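A hedged addition to the nmstatectl procedure above: nmstate can apply a state with a manual-confirmation checkpoint, which is useful when the new bridge carries the connection you manage the host over. The sketch below reuses the ~/create-bridge.yml file from the procedure; the timeout value is an arbitrary example, and you should verify the options against the nmstatectl(8) man page on your system.

# Apply the configuration without auto-committing; nmstate creates a checkpoint
# and rolls the change back automatically if it is not confirmed within the timeout
nmstatectl apply --no-commit --timeout 120 ~/create-bridge.yml

# If the host is still reachable over bridge0, make the change permanent
nmstatectl commit

# Otherwise, revert to the checkpointed state
# nmstatectl rollback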
3.4. Adding a Custom Boot Message | 3.4. Adding a Custom Boot Message Optionally, modify /tftpboot/linux-install/msgs/boot.msg to use a custom boot message. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/pxe_network_installations-adding_a_custom_boot_message |
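A minimal, hypothetical sketch of such a customization; the message text is an example only.

# Append a custom banner to the PXE boot message file referenced above
cat >> /tftpboot/linux-install/msgs/boot.msg <<'EOF'
Welcome to the Example Corp provisioning server.
Unauthorized use is prohibited.
EOF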
Chapter 1. Preparing to install on a single node | Chapter 1. Preparing to install on a single node 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 1.2. About OpenShift on a single node You can create a single-node cluster with standard installation methods. OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special Ignition configuration file. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability. Important The use of OpenShiftSDN with single-node OpenShift is not supported. OVN-Kubernetes is the default network plugin for single-node OpenShift deployments. 1.3. Requirements for installing OpenShift on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the following requirements: Administration host: You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation. Note For the ppc64le platform, the host should prepare the ISO, but does not need to create the USB boot drive. The ISO can be mounted to PowerVM directly. Note ISO is not required for IBM Z(R) installations. CPU Architecture: Installing OpenShift Container Platform on a single node supports x86_64 , arm64 , ppc64le , and s390x CPU architectures. Supported platforms: Installing OpenShift Container Platform on a single node is supported on bare metal and Certified third-party hypervisors . In most cases, you must specify the platform.none: {} parameter in the install-config.yaml configuration file. The following list shows the only exceptions and the corresponding parameter to specify in the install-config.yaml configuration file: Amazon Web Services (AWS), where you use platform=aws Google Cloud Platform (GCP), where you use platform=gcp Microsoft Azure, where you use platform=azure Production-grade server: Installing OpenShift Container Platform on a single node requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 1.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 8 vCPUs 16 GB of RAM 120 GB Note One vCPU equals one physical core. However, if you enable simultaneous multithreading (SMT), or Hyper-Threading, use the following formula to calculate the number of vCPUs that represent one physical core: (threads per core x cores) x sockets = vCPUs Adding Operators during the installation process might increase the minimum resource requirements. The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Note BMC is not supported on IBM Z(R) and IBM Power(R). Networking: The server must have access to the internet or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN): Table 1.2. 
Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. This record must be resolvable by both clients external to the cluster and within the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by both clients external to the cluster and within the cluster. Important Without persistent IP addresses, communications between the apiserver and etcd might fail. | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_a_single_node/preparing-to-install-sno |
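Before starting the installation, you can check the required DNS records from the administration host. The following is a hedged sketch; the cluster name, base domain, and the test.apps host name are placeholders, and dig is provided by the bind-utils package on RHEL.

# Placeholder values - replace with your own cluster name and base domain
CLUSTER_NAME=sno
BASE_DOMAIN=example.com

# api and api-int must resolve; test.apps exercises the *.apps wildcard record
for fqdn in "api.${CLUSTER_NAME}.${BASE_DOMAIN}" \
            "api-int.${CLUSTER_NAME}.${BASE_DOMAIN}" \
            "test.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"; do
    printf '%s -> %s\n' "${fqdn}" "$(dig +short "${fqdn}")"
done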
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/getting_started_with_red_hat_build_of_apache_camel_for_spring_boot/pr01 |
Chapter 8. Windows node upgrades You can ensure your Windows nodes have the latest updates by upgrading the Windows Machine Config Operator (WMCO). 8.1. Windows Machine Config Operator upgrades When a new version of the Windows Machine Config Operator (WMCO) is released that is compatible with the current cluster version, the Operator is upgraded based on the upgrade channel and subscription approval strategy it was installed with when using the Operator Lifecycle Manager (OLM). The WMCO upgrade results in the Kubernetes components in the Windows machine being upgraded. Note If you are upgrading to a new version of the WMCO and want to use cluster monitoring, you must have the openshift.io/cluster-monitoring=true label present in the WMCO namespace. If you add the label to a pre-existing WMCO namespace, and there are already Windows nodes configured, restart the WMCO pod to allow monitoring graphs to display. For a non-disruptive upgrade, the WMCO terminates the Windows machines configured by the previous version of the WMCO and recreates them using the current version. This is done by deleting the Machine object, which results in the drain and deletion of the Windows node. To facilitate an upgrade, the WMCO adds a version annotation to all the configured nodes. During an upgrade, a mismatch in version annotation results in the deletion and recreation of a Windows machine. To have minimal service disruptions during an upgrade, the WMCO only updates one Windows machine at a time. After the update, it is recommended that you set the spec.os.name.windows parameter in your workload pods. The WMCO uses this field to authoritatively identify the pod operating system for validation and to enforce Windows-specific pod security context constraints (SCCs). Important The WMCO is only responsible for updating Kubernetes components, not for Windows operating system updates. You provide the Windows image when creating the VMs; therefore, you are responsible for providing an updated image. You can provide an updated Windows image by changing the image configuration in the MachineSet spec. For more information on Operator upgrades using the Operator Lifecycle Manager (OLM), see Updating installed Operators . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/windows_container_support_for_openshift/windows-node-upgrades
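Two hedged snippets that follow from the notes above. The namespace name and label selector are assumptions based on a common WMCO installation (openshift-windows-machine-config-operator); verify both in your cluster before running the commands.

# Namespace and selector are assumptions - confirm them in your cluster first
WMCO_NS=openshift-windows-machine-config-operator

# Add the label that cluster monitoring requires on the WMCO namespace
oc label namespace "${WMCO_NS}" openshift.io/cluster-monitoring=true --overwrite

# On a pre-existing installation with Windows nodes already configured, restart
# the WMCO pod so the monitoring graphs start to display
oc -n "${WMCO_NS}" delete pod -l name=windows-machine-config-operator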
Appendix B. Event Codes | Appendix B. Event Codes This table lists all event codes. Table B.1. Event codes Code Name Severity Message 0 UNASSIGNED Info 1 VDC_START Info Starting oVirt Engine. 2 VDC_STOP Info Stopping oVirt Engine. 12 VDS_FAILURE Error Host USD{VdsName} is non responsive. 13 VDS_DETECTED Info Status of host USD{VdsName} was set to USD{HostStatus}. 14 VDS_RECOVER Info Host USD{VdsName} is rebooting. 15 VDS_MAINTENANCE Normal Host USD{VdsName} was switched to Maintenance Mode. 16 VDS_ACTIVATE Info Activation of host USD{VdsName} initiated by USD{UserName}. 17 VDS_MAINTENANCE_FAILED Error Failed to switch Host USD{VdsName} to Maintenance mode. 18 VDS_ACTIVATE_FAILED Error Failed to activate Host USD{VdsName}.(User: USD{UserName}). 19 VDS_RECOVER_FAILED Error Host USD{VdsName} failed to recover. 20 USER_VDS_START Info Host USD{VdsName} was started by USD{UserName}. 21 USER_VDS_STOP Info Host USD{VdsName} was stopped by USD{UserName}. 22 IRS_FAILURE Error Failed to access Storage on Host USD{VdsName}. 23 VDS_LOW_DISK_SPACE Warning Warning, Low disk space. Host USD{VdsName} has less than USD{DiskSpace} MB of free space left on: USD{Disks}. 24 VDS_LOW_DISK_SPACE_ERROR Error Critical, Low disk space. Host USD{VdsName} has less than USD{DiskSpace} MB of free space left on: USD{Disks}. Low disk space might cause an issue upgrading this host. 25 VDS_NO_SELINUX_ENFORCEMENT Warning Host USD{VdsName} does not enforce SELinux. Current status: USD{Mode} 26 IRS_DISK_SPACE_LOW Warning Warning, Low disk space. USD{StorageDomainName} domain has USD{DiskSpace} GB of free space. 27 VDS_STATUS_CHANGE_FAILED_DUE_TO_STOP_SPM_FAILURE Warning Failed to change status of host USD{VdsName} due to a failure to stop the spm. 28 VDS_PROVISION Warning Installing OS on Host USD{VdsName} using Hostgroup USD{HostGroupName}. 29 USER_ADD_VM_TEMPLATE_SUCCESS Info Template USD{VmTemplateName} was created successfully. 31 USER_VDC_LOGOUT Info User USD{UserName} connected from 'USD{SourceIP}' using session 'USD{SessionID}' logged out. 32 USER_RUN_VM Info VM USD{VmName} started on Host USD{VdsName} 33 USER_STOP_VM Info VM USD{VmName} powered off by USD{UserName} (Host: USD{VdsName})USD{OptionalReason}. 34 USER_ADD_VM Info VM USD{VmName} was created by USD{UserName}. 35 USER_UPDATE_VM Info VM USD{VmName} configuration was updated by USD{UserName}. 36 USER_ADD_VM_TEMPLATE_FAILURE Error Failed creating Template USD{VmTemplateName}. 37 USER_ADD_VM_STARTED Info VM USD{VmName} creation was initiated by USD{UserName}. 38 USER_CHANGE_DISK_VM Info CD USD{DiskName} was inserted to VM USD{VmName} by USD{UserName}. 39 USER_PAUSE_VM Info VM USD{VmName} was suspended by USD{UserName} (Host: USD{VdsName}). 40 USER_RESUME_VM Info VM USD{VmName} was resumed by USD{UserName} (Host: USD{VdsName}). 41 USER_VDS_RESTART Info Host USD{VdsName} was restarted by USD{UserName}. 42 USER_ADD_VDS Info Host USD{VdsName} was added by USD{UserName}. 43 USER_UPDATE_VDS Info Host USD{VdsName} configuration was updated by USD{UserName}. 44 USER_REMOVE_VDS Info Host USD{VdsName} was removed by USD{UserName}. 45 USER_CREATE_SNAPSHOT Info Snapshot 'USD{SnapshotName}' creation for VM 'USD{VmName}' was initiated by USD{UserName}. 46 USER_TRY_BACK_TO_SNAPSHOT Info Snapshot-Preview USD{SnapshotName} for VM USD{VmName} was initiated by USD{UserName}. 47 USER_RESTORE_FROM_SNAPSHOT Info VM USD{VmName} restored from Snapshot by USD{UserName}. 48 USER_ADD_VM_TEMPLATE Info Creation of Template USD{VmTemplateName} from VM USD{VmName} was initiated by USD{UserName}. 
49 USER_UPDATE_VM_TEMPLATE Info Template USD{VmTemplateName} configuration was updated by USD{UserName}. 50 USER_REMOVE_VM_TEMPLATE Info Removal of Template USD{VmTemplateName} was initiated by USD{UserName}. 51 USER_ADD_VM_TEMPLATE_FINISHED_SUCCESS Info Creation of Template USD{VmTemplateName} from VM USD{VmName} has been completed. 52 USER_ADD_VM_TEMPLATE_FINISHED_FAILURE Error Failed to complete creation of Template USD{VmTemplateName} from VM USD{VmName}. 53 USER_ADD_VM_FINISHED_SUCCESS Info VM USD{VmName} creation has been completed. 54 USER_FAILED_RUN_VM Error Failed to run VM USD{VmName}USD{DueToError} (User: USD{UserName}). 55 USER_FAILED_PAUSE_VM Error Failed to suspend VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 56 USER_FAILED_STOP_VM Error Failed to power off VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 57 USER_FAILED_ADD_VM Error Failed to create VM USD{VmName} (User: USD{UserName}). 58 USER_FAILED_UPDATE_VM Error Failed to update VM USD{VmName} (User: USD{UserName}). 59 USER_FAILED_REMOVE_VM Error 60 USER_ADD_VM_FINISHED_FAILURE Error Failed to complete VM USD{VmName} creation. 61 VM_DOWN Info VM USD{VmName} is down. USD{ExitMessage} 62 VM_MIGRATION_START Info Migration started (VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}, User: USD{UserName}). USD{OptionalReason} 63 VM_MIGRATION_DONE Info Migration completed (VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}, Duration: USD{Duration}, Total: USD{TotalDuration}, Actual downtime: USD{ActualDowntime}) 64 VM_MIGRATION_ABORT Error Migration failed: USD{MigrationError} (VM: USD{VmName}, Source: USD{VdsName}). 65 VM_MIGRATION_FAILED Error Migration failedUSD{DueToMigrationError} (VM: USD{VmName}, Source: USD{VdsName}). 66 VM_FAILURE Error VM USD{VmName} cannot be found on Host USD{VdsName}. 67 VM_MIGRATION_START_SYSTEM_INITIATED Info Migration initiated by system (VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}, Reason: USD{OptionalReason}). 68 USER_CREATE_SNAPSHOT_FINISHED_SUCCESS Info Snapshot 'USD{SnapshotName}' creation for VM 'USD{VmName}' has been completed. 69 USER_CREATE_SNAPSHOT_FINISHED_FAILURE Error Failed to complete snapshot 'USD{SnapshotName}' creation for VM 'USD{VmName}'. 70 USER_RUN_VM_AS_STATELESS_FINISHED_FAILURE Error Failed to complete starting of VM USD{VmName}. 71 USER_TRY_BACK_TO_SNAPSHOT_FINISH_SUCCESS Info Snapshot-Preview USD{SnapshotName} for VM USD{VmName} has been completed. 72 MERGE_SNAPSHOTS_ON_HOST Info Merging snapshots (USD{SourceSnapshot} into USD{DestinationSnapshot}) of disk USD{Disk} on host USD{VDS} 73 USER_INITIATED_SHUTDOWN_VM Info VM shutdown initiated by USD{UserName} on VM USD{VmName} (Host: USD{VdsName})USD{OptionalReason}. 74 USER_FAILED_SHUTDOWN_VM Error Failed to initiate shutdown on VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 75 VDS_SOFT_RECOVER Info Soft fencing on host USD{VdsName} was successful. 76 USER_STOPPED_VM_INSTEAD_OF_SHUTDOWN Info VM USD{VmName} was powered off ungracefully by USD{UserName} (Host: USD{VdsName})USD{OptionalReason}. 77 USER_FAILED_STOPPING_VM_INSTEAD_OF_SHUTDOWN Error Failed to power off VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 78 USER_ADD_DISK_TO_VM Info Add-Disk operation of USD{DiskAlias} was initiated on VM USD{VmName} by USD{UserName}. 79 USER_FAILED_ADD_DISK_TO_VM Error Add-Disk operation failed on VM USD{VmName} (User: USD{UserName}). 
80 USER_REMOVE_DISK_FROM_VM Info Disk was removed from VM USD{VmName} by USD{UserName}. 81 USER_FAILED_REMOVE_DISK_FROM_VM Error Failed to remove Disk from VM USD{VmName} (User: USD{UserName}). 88 USER_UPDATE_VM_DISK Info VM USD{VmName} USD{DiskAlias} disk was updated by USD{UserName}. 89 USER_FAILED_UPDATE_VM_DISK Error Failed to update VM USD{VmName} disk USD{DiskAlias} (User: USD{UserName}). 90 VDS_FAILED_TO_GET_HOST_HARDWARE_INFO Warning Could not get hardware information for host USD{VdsName} 94 USER_COMMIT_RESTORE_FROM_SNAPSHOT_START Info Committing a Snapshot-Preview for VM USD{VmName} was initialized by USD{UserName}. 95 USER_COMMIT_RESTORE_FROM_SNAPSHOT_FINISH_SUCCESS Info Committing a Snapshot-Preview for VM USD{VmName} has been completed. 96 USER_COMMIT_RESTORE_FROM_SNAPSHOT_FINISH_FAILURE Error Failed to commit Snapshot-Preview for VM USD{VmName}. 97 USER_ADD_DISK_TO_VM_FINISHED_SUCCESS Info The disk USD{DiskAlias} was successfully added to VM USD{VmName}. 98 USER_ADD_DISK_TO_VM_FINISHED_FAILURE Error Add-Disk operation failed to complete on VM USD{VmName}. 99 USER_TRY_BACK_TO_SNAPSHOT_FINISH_FAILURE Error Failed to complete Snapshot-Preview USD{SnapshotName} for VM USD{VmName}. 100 USER_RESTORE_FROM_SNAPSHOT_FINISH_SUCCESS Info VM USD{VmName} restoring from Snapshot has been completed. 101 USER_RESTORE_FROM_SNAPSHOT_FINISH_FAILURE Error Failed to complete restoring from Snapshot of VM USD{VmName}. 102 USER_FAILED_CHANGE_DISK_VM Error Failed to change disk in VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 103 USER_FAILED_RESUME_VM Error Failed to resume VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 104 USER_FAILED_ADD_VDS Error Failed to add Host USD{VdsName} (User: USD{UserName}). 105 USER_FAILED_UPDATE_VDS Error Failed to update Host USD{VdsName} (User: USD{UserName}). 106 USER_FAILED_REMOVE_VDS Error Failed to remove Host USD{VdsName} (User: USD{UserName}). 107 USER_FAILED_VDS_RESTART Error Failed to restart Host USD{VdsName}, (User: USD{UserName}). 108 USER_FAILED_ADD_VM_TEMPLATE Error Failed to initiate creation of Template USD{VmTemplateName} from VM USD{VmName} (User: USD{UserName}). 109 USER_FAILED_UPDATE_VM_TEMPLATE Error Failed to update Template USD{VmTemplateName} (User: USD{UserName}). 110 USER_FAILED_REMOVE_VM_TEMPLATE Error Failed to initiate removal of Template USD{VmTemplateName} (User: USD{UserName}). 111 USER_STOP_SUSPENDED_VM Info Suspended VM USD{VmName} has had its save state cleared by USD{UserName}USD{OptionalReason}. 112 USER_STOP_SUSPENDED_VM_FAILED Error Failed to power off suspended VM USD{VmName} (User: USD{UserName}). 113 USER_REMOVE_VM_FINISHED Info VM USD{VmName} was successfully removed. 115 USER_FAILED_TRY_BACK_TO_SNAPSHOT Error Failed to preview Snapshot USD{SnapshotName} for VM USD{VmName} (User: USD{UserName}). 116 USER_FAILED_RESTORE_FROM_SNAPSHOT Error Failed to restore VM USD{VmName} from Snapshot (User: USD{UserName}). 117 USER_FAILED_CREATE_SNAPSHOT Error Failed to create Snapshot USD{SnapshotName} for VM USD{VmName} (User: USD{UserName}). 118 USER_FAILED_VDS_START Error Failed to start Host USD{VdsName}, (User: USD{UserName}). 119 VM_DOWN_ERROR Error VM USD{VmName} is down with error. USD{ExitMessage}. 120 VM_MIGRATION_TO_SERVER_FAILED Error Migration failedUSD{DueToMigrationError} (VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}). 121 SYSTEM_VDS_RESTART Info Host USD{VdsName} was restarted by the engine. 
122 SYSTEM_FAILED_VDS_RESTART Error A restart initiated by the engine to Host USD{VdsName} has failed. 123 VDS_SLOW_STORAGE_RESPONSE_TIME Warning Slow storage response time on Host USD{VdsName}. 124 VM_IMPORT Info Started VM import of USD{ImportedVmName} (User: USD{UserName}) 125 VM_IMPORT_FAILED Error Failed to import VM USD{ImportedVmName} (User: USD{UserName}) 126 VM_NOT_RESPONDING Warning VM USD{VmName} is not responding. 127 VDS_RUN_IN_NO_KVM_MODE Error Host USD{VdsName} running without virtualization hardware acceleration 128 VM_MIGRATION_TRYING_RERUN Warning Failed to migrate VM USD{VmName} to Host USD{DestinationVdsName}USD{DueToMigrationError}. Trying to migrate to another Host. 129 VM_CLEARED Info Unused 130 USER_SUSPEND_VM_FINISH_FAILURE_WILL_TRY_AGAIN Error Failed to complete suspending of VM USD{VmName}, will try again. 131 USER_EXPORT_VM Info VM USD{VmName} exported to USD{ExportPath} by USD{UserName} 132 USER_EXPORT_VM_FAILED Error Failed to export VM USD{VmName} to USD{ExportPath} (User: USD{UserName}) 133 USER_EXPORT_TEMPLATE Info Template USD{VmTemplateName} exported to USD{ExportPath} by USD{UserName} 134 USER_EXPORT_TEMPLATE_FAILED Error Failed to export Template USD{VmTemplateName} to USD{ExportPath} (User: USD{UserName}) 135 TEMPLATE_IMPORT Info Started Template import of USD{ImportedVmTemplateName} (User: USD{UserName}) 136 TEMPLATE_IMPORT_FAILED Error Failed to import Template USD{ImportedVmTemplateName} (User: USD{UserName}) 137 USER_FAILED_VDS_STOP Error Failed to stop Host USD{VdsName}, (User: USD{UserName}). 138 VM_PAUSED_ENOSPC Error VM USD{VmName} has been paused due to no Storage space error. 139 VM_PAUSED_ERROR Error VM USD{VmName} has been paused due to unknown storage error. 140 VM_MIGRATION_FAILED_DURING_MOVE_TO_MAINTENANCE Error Migration failedUSD{DueToMigrationError} while Host is in 'preparing for maintenance' state.\n Consider manual intervention\: stopping/migrating Vms as Host's state will not\n turn to maintenance while VMs are still running on it.(VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}). 141 VDS_VERSION_NOT_SUPPORTED_FOR_CLUSTER Error Host USD{VdsName} is installed with VDSM version (USD{VdsSupportedVersions}) and cannot join cluster USD{ClusterName} which is compatible with VDSM versions USD{CompatibilityVersion}. 142 VM_SET_TO_UNKNOWN_STATUS Warning VM USD{VmName} was set to the Unknown status. 143 VM_WAS_SET_DOWN_DUE_TO_HOST_REBOOT_OR_MANUAL_FENCE Info Vm USD{VmName} was shut down due to USD{VdsName} host reboot or manual fence 144 VM_IMPORT_INFO Info Value of field USD{FieldName} of imported VM USD{VmName} is USD{FieldValue}. The field is reset to the default value 145 VM_PAUSED_EIO Error VM USD{VmName} has been paused due to storage I/O problem. 146 VM_PAUSED_EPERM Error VM USD{VmName} has been paused due to storage permissions problem. 147 VM_POWER_DOWN_FAILED Warning Shutdown of VM USD{VmName} failed. 148 VM_MEMORY_UNDER_GUARANTEED_VALUE Error VM USD{VmName} on host USD{VdsName} was guaranteed USD{MemGuaranteed} MB but currently has USD{MemActual} MB 149 USER_ADD Info User 'USD{NewUserName}' was added successfully to the system. 150 USER_INITIATED_RUN_VM Info Starting VM USD{VmName} was initiated by USD{UserName}. 151 USER_INITIATED_RUN_VM_FAILED Warning Failed to run VM USD{VmName} on Host USD{VdsName}. 152 USER_RUN_VM_ON_NON_DEFAULT_VDS Warning Guest USD{VmName} started on Host USD{VdsName}. (Default Host parameter was ignored - assigned Host was not available). 
153 USER_STARTED_VM Info VM USD{VmName} was started by USD{UserName} (Host: USD{VdsName}). 154 VDS_CLUSTER_VERSION_NOT_SUPPORTED Error Host USD{VdsName} is compatible with versions (USD{VdsSupportedVersions}) and cannot join Cluster USD{ClusterName} which is set to version USD{CompatibilityVersion}. 155 VDS_ARCHITECTURE_NOT_SUPPORTED_FOR_CLUSTER Error Host USD{VdsName} has architecture USD{VdsArchitecture} and cannot join Cluster USD{ClusterName} which has architecture USD{ClusterArchitecture}. 156 CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION Error Host USD{VdsName} moved to Non-Operational state as host CPU type is not supported in this cluster compatibility version or is not supported at all 157 USER_REBOOT_VM Info User USD{UserName} initiated reboot of VM USD{VmName}. 158 USER_FAILED_REBOOT_VM Error Failed to reboot VM USD{VmName} (User: USD{UserName}). 159 USER_FORCE_SELECTED_SPM Info Host USD{VdsName} was force selected by USD{UserName} 160 USER_ACCOUNT_DISABLED_OR_LOCKED Error User USD{UserName} cannot login, as it got disabled or locked. Please contact the system administrator. 161 VM_CANCEL_MIGRATION Info Migration cancelled (VM: USD{VmName}, Source: USD{VdsName}, User: USD{UserName}). 162 VM_CANCEL_MIGRATION_FAILED Error Failed to cancel migration for VM: USD{VmName} 163 VM_STATUS_RESTORED Info VM USD{VmName} status was restored to USD{VmStatus}. 164 VM_SET_TICKET Info User USD{UserName} initiated console session for VM USD{VmName} 165 VM_SET_TICKET_FAILED Error User USD{UserName} failed to initiate a console session for VM USD{VmName} 166 VM_MIGRATION_NO_VDS_TO_MIGRATE_TO Warning No available host was found to migrate VM USD{VmName} to. 167 VM_CONSOLE_CONNECTED Info User USD{UserName} is connected to VM USD{VmName}. 168 VM_CONSOLE_DISCONNECTED Info User USD{UserName} got disconnected from VM USD{VmName}. 169 VM_FAILED_TO_PRESTART_IN_POOL Warning Cannot pre-start VM in pool 'USD{VmPoolName}'. The system will continue trying. 170 USER_CREATE_LIVE_SNAPSHOT_FINISHED_FAILURE Warning Failed to create live snapshot 'USD{SnapshotName}' for VM 'USD{VmName}'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency. 171 USER_RUN_VM_AS_STATELESS_WITH_DISKS_NOT_ALLOWING_SNAPSHOT Warning VM USD{VmName} was run as stateless with one or more of disks that do not allow snapshots (User:USD{UserName}). 172 USER_REMOVE_VM_FINISHED_WITH_ILLEGAL_DISKS Warning VM USD{VmName} has been removed, but the following disks could not be removed: USD{DisksNames}. These disks will appear in the main disks tab in illegal state, please remove manually when possible. 173 USER_CREATE_LIVE_SNAPSHOT_NO_MEMORY_FAILURE Error Failed to save memory as part of Snapshot USD{SnapshotName} for VM USD{VmName} (User: USD{UserName}). 174 VM_IMPORT_FROM_CONFIGURATION_EXECUTED_SUCCESSFULLY Info VM USD{VmName} has been successfully imported from the given configuration. 175 VM_IMPORT_FROM_CONFIGURATION_ATTACH_DISKS_FAILED Warning VM USD{VmName} has been imported from the given configuration but the following disk(s) failed to attach: USD{DiskAliases}. 176 VM_BALLOON_DRIVER_ERROR Error The Balloon driver on VM USD{VmName} on host USD{VdsName} is requested but unavailable. 177 VM_BALLOON_DRIVER_UNCONTROLLED Error The Balloon device on VM USD{VmName} on host USD{VdsName} is inflated but the device cannot be controlled (guest agent is down). 
178 VM_MEMORY_NOT_IN_RECOMMENDED_RANGE Warning VM USD{VmName} was configured with USD{VmMemInMb}MiB of memory while the recommended value range is USD{VmMinMemInMb}MiB - USD{VmMaxMemInMb}MiB 179 USER_INITIATED_RUN_VM_AND_PAUSE Info Starting in paused mode VM USD{VmName} was initiated by USD{UserName}. 180 TEMPLATE_IMPORT_FROM_CONFIGURATION_SUCCESS Info Template USD{VmTemplateName} has been successfully imported from the given configuration. 181 TEMPLATE_IMPORT_FROM_CONFIGURATION_FAILED Error Failed to import Template USD{VmTemplateName} from the given configuration. 182 USER_FAILED_ATTACH_USER_TO_VM Error Failed to attach User USD{AdUserName} to VM USD{VmName} (User: USD{UserName}). 183 USER_ATTACH_TAG_TO_TEMPLATE Info Tag USD{TagName} was attached to Templates(s) USD{TemplatesNames} by USD{UserName}. 184 USER_ATTACH_TAG_TO_TEMPLATE_FAILED Error Failed to attach Tag USD{TagName} to Templates(s) USD{TemplatesNames} (User: USD{UserName}). 185 USER_DETACH_TEMPLATE_FROM_TAG Info Tag USD{TagName} was detached from Template(s) USD{TemplatesNames} by USD{UserName}. 186 USER_DETACH_TEMPLATE_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from TEMPLATE(s) USD{TemplatesNames} (User: USD{UserName}). 187 VDS_STORAGE_CONNECTION_FAILED_BUT_LAST_VDS Error Failed to connect Host USD{VdsName} to Data Center, due to connectivity errors with the Storage. Host USD{VdsName} will remain in Up state (but inactive), as it is the last Host in the Data Center, to enable manual intervention by the Administrator. 188 VDS_STORAGES_CONNECTION_FAILED Error Failed to connect Host USD{VdsName} to the Storage Domains USD{failedStorageDomains}. 189 VDS_STORAGE_VDS_STATS_FAILED Error Host USD{VdsName} reports about one of the Active Storage Domains as Problematic. 190 UPDATE_OVF_FOR_STORAGE_DOMAIN_FAILED Warning Failed to update VMs/Templates OVF data for Storage Domain USD{StorageDomainName} in Data Center USD{StoragePoolName}. 191 CREATE_OVF_STORE_FOR_STORAGE_DOMAIN_FAILED Warning Failed to create OVF store disk for Storage Domain USD{StorageDomainName}.\n The Disk with the id USD{DiskId} might be removed manually for automatic attempt to create new one. \n OVF updates won't be attempted on the created disk. 192 CREATE_OVF_STORE_FOR_STORAGE_DOMAIN_INITIATE_FAILED Warning Failed to create OVF store disk for Storage Domain USD{StorageDomainName}. \n OVF data won't be updated meanwhile for that domain. 193 DELETE_OVF_STORE_FOR_STORAGE_DOMAIN_FAILED Warning Failed to delete the OVF store disk for Storage Domain USD{StorageDomainName}.\n In order to detach the domain please remove it manually or try to detach the domain again for another attempt. 194 VM_CANCEL_CONVERSION Info Conversion cancelled (VM: USD{VmName}, Source: USD{VdsName}, User: USD{UserName}). 195 VM_CANCEL_CONVERSION_FAILED Error Failed to cancel conversion for VM: USD{VmName} 196 VM_RECOVERED_FROM_PAUSE_ERROR Normal VM USD{VmName} has recovered from paused back to up. 197 SYSTEM_SSH_HOST_RESTART Info Host USD{VdsName} was restarted using SSH by the engine. 198 SYSTEM_FAILED_SSH_HOST_RESTART Error A restart using SSH initiated by the engine to Host USD{VdsName} has failed. 199 USER_UPDATE_OVF_STORE Info OVF_STORE for domain USD{StorageDomainName} was updated by USD{UserName}. 200 IMPORTEXPORT_GET_VMS_INFO_FAILED Error Failed to retrieve VM/Templates information from export domain USD{StorageDomainName} 201 IRS_DISK_SPACE_LOW_ERROR Error Critical, Low disk space. USD{StorageDomainName} domain has USD{DiskSpace} GB of free space. 
202 IMPORTEXPORT_GET_EXTERNAL_VMS_INFO_FAILED Error Failed to retrieve VMs information from external server USD{URL} 204 IRS_HOSTED_ON_VDS Info Storage Pool Manager runs on Host USD{VdsName} (Address: USD{ServerIp}), Data Center USD{StoragePoolName}. 205 PROVIDER_ADDED Info Provider USD{ProviderName} was added. (User: USD{UserName}) 206 PROVIDER_ADDITION_FAILED Error Failed to add provider USD{ProviderName}. (User: USD{UserName}) 207 PROVIDER_UPDATED Info Provider USD{ProviderName} was updated. (User: USD{UserName}) 208 PROVIDER_UPDATE_FAILED Error Failed to update provider USD{ProviderName}. (User: USD{UserName}) 209 PROVIDER_REMOVED Info Provider USD{ProviderName} was removed. (User: USD{UserName}) 210 PROVIDER_REMOVAL_FAILED Error Failed to remove provider USD{ProviderName}. (User: USD{UserName}) 213 PROVIDER_CERTIFICATE_IMPORTED Info Certificate for provider USD{ProviderName} was imported. (User: USD{UserName}) 214 PROVIDER_CERTIFICATE_IMPORT_FAILED Error Failed importing Certificate for provider USD{ProviderName}. (User: USD{UserName}) 215 PROVIDER_SYNCHRONIZED Info 216 PROVIDER_SYNCHRONIZED_FAILED Error Failed to synchronize networks of Provider USD{ProviderName}. 217 PROVIDER_SYNCHRONIZED_PERFORMED Info Networks of Provider USD{ProviderName} were successfully synchronized. 218 PROVIDER_SYNCHRONIZED_PERFORMED_FAILED Error Networks of Provider USD{ProviderName} were incompletely synchronized. 219 PROVIDER_SYNCHRONIZED_DISABLED Error Failed to synchronize networks of Provider USD{ProviderName}, because the authentication information of the provider is invalid. Automatic synchronization is deactivated for this Provider. 250 USER_UPDATE_VM_CLUSTER_DEFAULT_HOST_CLEARED Info USD{VmName} cluster was updated by USD{UserName}, Default host was reset to auto assign. 251 USER_REMOVE_VM_TEMPLATE_FINISHED Info Removal of Template USD{VmTemplateName} has been completed. 252 SYSTEM_FAILED_UPDATE_VM Error Failed to Update VM USD{VmName} that was initiated by system. 253 SYSTEM_UPDATE_VM Info VM USD{VmName} configuration was updated by system. 254 VM_ALREADY_IN_REQUESTED_STATUS Info VM USD{VmName} is already USD{VmStatus}, USD{Action} was skipped. User: USD{UserName}. 302 USER_ADD_VM_POOL_WITH_VMS Info VM Pool USD{VmPoolName} (containing USD{VmsCount} VMs) was created by USD{UserName}. 303 USER_ADD_VM_POOL_WITH_VMS_FAILED Error Failed to create VM Pool USD{VmPoolName} (User: USD{UserName}). 304 USER_REMOVE_VM_POOL Info VM Pool USD{VmPoolName} was removed by USD{UserName}. 305 USER_REMOVE_VM_POOL_FAILED Error Failed to remove VM Pool USD{VmPoolName} (User: USD{UserName}). 306 USER_ADD_VM_TO_POOL Info VM USD{VmName} was added to VM Pool USD{VmPoolName} by USD{UserName}. 307 USER_ADD_VM_TO_POOL_FAILED Error Failed to add VM USD{VmName} to VM Pool USD{VmPoolName}(User: USD{UserName}). 308 USER_REMOVE_VM_FROM_POOL Info VM USD{VmName} was removed from VM Pool USD{VmPoolName} by USD{UserName}. 309 USER_REMOVE_VM_FROM_POOL_FAILED Error Failed to remove VM USD{VmName} from VM Pool USD{VmPoolName} (User: USD{UserName}). 310 USER_ATTACH_USER_TO_POOL Info User USD{AdUserName} was attached to VM Pool USD{VmPoolName} by USD{UserName}. 311 USER_ATTACH_USER_TO_POOL_FAILED Error Failed to attach User USD{AdUserName} to VM Pool USD{VmPoolName} (User: USD{UserName}). 312 USER_DETACH_USER_FROM_POOL Info User USD{AdUserName} was detached from VM Pool USD{VmPoolName} by USD{UserName}. 313 USER_DETACH_USER_FROM_POOL_FAILED Error Failed to detach User USD{AdUserName} from VM Pool USD{VmPoolName} (User: USD{UserName}). 
314 USER_UPDATE_VM_POOL Info VM Pool USD{VmPoolName} configuration was updated by USD{UserName}. 315 USER_UPDATE_VM_POOL_FAILED Error Failed to update VM Pool USD{VmPoolName} configuration (User: USD{UserName}). 316 USER_ATTACH_USER_TO_VM_FROM_POOL Info Attaching User USD{AdUserName} to VM USD{VmName} in VM Pool USD{VmPoolName} was initiated by USD{UserName}. 317 USER_ATTACH_USER_TO_VM_FROM_POOL_FAILED Error Failed to attach User USD{AdUserName} to VM from VM Pool USD{VmPoolName} (User: USD{UserName}). 318 USER_ATTACH_USER_TO_VM_FROM_POOL_FINISHED_SUCCESS Info User USD{AdUserName} successfully attached to VM USD{VmName} in VM Pool USD{VmPoolName}. 319 USER_ATTACH_USER_TO_VM_FROM_POOL_FINISHED_FAILURE Error Failed to attach user USD{AdUserName} to VM USD{VmName} in VM Pool USD{VmPoolName}. 320 USER_ADD_VM_POOL_WITH_VMS_ADD_VDS_FAILED Error Pool USD{VmPoolName} Created, but some Vms failed to create (User: USD{UserName}). 321 USER_REMOVE_VM_POOL_INITIATED Info VM Pool USD{VmPoolName} removal was initiated by USD{UserName}. 325 USER_REMOVE_ADUSER Info User USD{AdUserName} was removed by USD{UserName}. 326 USER_FAILED_REMOVE_ADUSER Error Failed to remove User USD{AdUserName} (User: USD{UserName}). 327 USER_FAILED_ADD_ADUSER Warning Failed to add User 'USD{NewUserName}' to the system. 342 USER_REMOVE_SNAPSHOT Info Snapshot 'USD{SnapshotName}' deletion for VM 'USD{VmName}' was initiated by USD{UserName}. 343 USER_FAILED_REMOVE_SNAPSHOT Error Failed to remove Snapshot USD{SnapshotName} for VM USD{VmName} (User: USD{UserName}). 344 USER_UPDATE_VM_POOL_WITH_VMS Info VM Pool USD{VmPoolName} was updated by USD{UserName}, USD{VmsCount} VMs were added. 345 USER_UPDATE_VM_POOL_WITH_VMS_FAILED Error Failed to update VM Pool USD{VmPoolName}(User: USD{UserName}). 346 USER_PASSWORD_CHANGED Info Password changed successfully for USD{UserName} 347 USER_PASSWORD_CHANGE_FAILED Error Failed to change password. (User: USD{UserName}) 348 USER_CLEAR_UNKNOWN_VMS Info All VMs' status on Non Responsive Host USD{VdsName} were changed to 'Down' by USD{UserName} 349 USER_FAILED_CLEAR_UNKNOWN_VMS Error Failed to clear VMs' status on Non Responsive Host USD{VdsName}. (User: USD{UserName}). 350 USER_ADD_BOOKMARK Info Bookmark USD{BookmarkName} was added by USD{UserName}. 351 USER_ADD_BOOKMARK_FAILED Error Failed to add bookmark: USD{BookmarkName} (User: USD{UserName}). 352 USER_UPDATE_BOOKMARK Info Bookmark USD{BookmarkName} was updated by USD{UserName}. 353 USER_UPDATE_BOOKMARK_FAILED Error Failed to update bookmark: USD{BookmarkName} (User: USD{UserName}) 354 USER_REMOVE_BOOKMARK Info Bookmark USD{BookmarkName} was removed by USD{UserName}. 355 USER_REMOVE_BOOKMARK_FAILED Error Failed to remove bookmark USD{BookmarkName} (User: USD{UserName}) 356 USER_REMOVE_SNAPSHOT_FINISHED_SUCCESS Info Snapshot 'USD{SnapshotName}' deletion for VM 'USD{VmName}' has been completed. 357 USER_REMOVE_SNAPSHOT_FINISHED_FAILURE Error Failed to delete snapshot 'USD{SnapshotName}' for VM 'USD{VmName}'. 358 USER_VM_POOL_MAX_SUBSEQUENT_FAILURES_REACHED Warning Not all VMs where successfully created in VM Pool USD{VmPoolName}. 359 USER_REMOVE_SNAPSHOT_FINISHED_FAILURE_PARTIAL_SNAPSHOT Warning Due to partial snapshot removal, Snapshot 'USD{SnapshotName}' of VM 'USD{VmName}' now contains only the following disks: 'USD{DiskAliases}'. 360 USER_DETACH_USER_FROM_VM Info User USD{AdUserName} was detached from VM USD{VmName} by USD{UserName}. 
361 USER_FAILED_DETACH_USER_FROM_VM Error Failed to detach User USD{AdUserName} from VM USD{VmName} (User: USD{UserName}). 362 USER_REMOVE_SNAPSHOT_FINISHED_FAILURE_BASE_IMAGE_NOT_FOUND Error Failed to merge images of snapshot 'USD{SnapshotName}': base volume 'USD{BaseVolumeId}' is missing. This may have been caused by a failed attempt to remove the parent snapshot; if this is the case, please retry deletion of the parent snapshot before deleting this one. 370 USER_EXTEND_DISK_SIZE_FAILURE Error Failed to extend size of the disk 'USD{DiskAlias}' to USD{NewSize} GB, User: USD{UserName}. 371 USER_EXTEND_DISK_SIZE_SUCCESS Info Size of the disk 'USD{DiskAlias}' was successfully updated to USD{NewSize} GB by USD{UserName}. 372 USER_EXTEND_DISK_SIZE_UPDATE_VM_FAILURE Warning Failed to update VM 'USD{VmName}' with the new volume size. VM restart is recommended. 373 USER_REMOVE_DISK_SNAPSHOT Info Disk 'USD{DiskAlias}' from Snapshot(s) 'USD{Snapshots}' of VM 'USD{VmName}' deletion was initiated by USD{UserName}. 374 USER_FAILED_REMOVE_DISK_SNAPSHOT Error Failed to delete Disk 'USD{DiskAlias}' from Snapshot(s) USD{Snapshots} of VM USD{VmName} (User: USD{UserName}). 375 USER_REMOVE_DISK_SNAPSHOT_FINISHED_SUCCESS Info Disk 'USD{DiskAlias}' from Snapshot(s) 'USD{Snapshots}' of VM 'USD{VmName}' deletion has been completed (User: USD{UserName}). 376 USER_REMOVE_DISK_SNAPSHOT_FINISHED_FAILURE Error Failed to complete deletion of Disk 'USD{DiskAlias}' from snapshot(s) 'USD{Snapshots}' of VM 'USD{VmName}' (User: USD{UserName}). 377 USER_EXTENDED_DISK_SIZE Info Extending disk 'USD{DiskAlias}' to USD{NewSize} GB was initiated by USD{UserName}. 378 USER_REGISTER_DISK_FINISHED_SUCCESS Info Disk 'USD{DiskAlias}' has been successfully registered as a floating disk. 379 USER_REGISTER_DISK_FINISHED_FAILURE Error Failed to register Disk 'USD{DiskAlias}'. 380 USER_EXTEND_DISK_SIZE_UPDATE_HOST_FAILURE Warning Failed to refresh volume size on host 'USD{VdsName}'. Please try the operation again. 381 USER_REGISTER_DISK_INITIATED Info Registering Disk 'USD{DiskAlias}' has been initiated. 382 USER_REDUCE_DISK_FINISHED_SUCCESS Info Disk 'USD{DiskAlias}' has been successfully reduced. 383 USER_REDUCE_DISK_FINISHED_FAILURE Error Failed to reduce Disk 'USD{DiskAlias}'. 400 USER_ATTACH_VM_TO_AD_GROUP Info Group USD{GroupName} was attached to VM USD{VmName} by USD{UserName}. 401 USER_ATTACH_VM_TO_AD_GROUP_FAILED Error Failed to attach Group USD{GroupName} to VM USD{VmName} (User: USD{UserName}). 402 USER_DETACH_VM_TO_AD_GROUP Info Group USD{GroupName} was detached from VM USD{VmName} by USD{UserName}. 403 USER_DETACH_VM_TO_AD_GROUP_FAILED Error Failed to detach Group USD{GroupName} from VM USD{VmName} (User: USD{UserName}). 404 USER_ATTACH_VM_POOL_TO_AD_GROUP Info Group USD{GroupName} was attached to VM Pool USD{VmPoolName} by USD{UserName}. 405 USER_ATTACH_VM_POOL_TO_AD_GROUP_FAILED Error Failed to attach Group USD{GroupName} to VM Pool USD{VmPoolName} (User: USD{UserName}). 406 USER_DETACH_VM_POOL_TO_AD_GROUP Info Group USD{GroupName} was detached from VM Pool USD{VmPoolName} by USD{UserName}. 407 USER_DETACH_VM_POOL_TO_AD_GROUP_FAILED Error Failed to detach Group USD{GroupName} from VM Pool USD{VmPoolName} (User: USD{UserName}). 408 USER_REMOVE_AD_GROUP Info Group USD{GroupName} was removed by USD{UserName}. 409 USER_REMOVE_AD_GROUP_FAILED Error Failed to remove group USD{GroupName} (User: USD{UserName}). 430 USER_UPDATE_TAG Info Tag USD{TagName} configuration was updated by USD{UserName}. 
431 USER_UPDATE_TAG_FAILED Error Failed to update Tag USD{TagName} (User: USD{UserName}). 432 USER_ADD_TAG Info New Tag USD{TagName} was created by USD{UserName}. 433 USER_ADD_TAG_FAILED Error Failed to create Tag named USD{TagName} (User: USD{UserName}). 434 USER_REMOVE_TAG Info Tag USD{TagName} was removed by USD{UserName}. 435 USER_REMOVE_TAG_FAILED Error Failed to remove Tag USD{TagName} (User: USD{UserName}). 436 USER_ATTACH_TAG_TO_USER Info Tag USD{TagName} was attached to User(s) USD{AttachUsersNames} by USD{UserName}. 437 USER_ATTACH_TAG_TO_USER_FAILED Error Failed to attach Tag USD{TagName} to User(s) USD{AttachUsersNames} (User: USD{UserName}). 438 USER_ATTACH_TAG_TO_USER_GROUP Info Tag USD{TagName} was attached to Group(s) USD{AttachGroupsNames} by USD{UserName}. 439 USER_ATTACH_TAG_TO_USER_GROUP_FAILED Error Failed to attach Group(s) USD{AttachGroupsNames} to Tag USD{TagName} (User: USD{UserName}). 440 USER_ATTACH_TAG_TO_VM Info Tag USD{TagName} was attached to VM(s) USD{VmsNames} by USD{UserName}. 441 USER_ATTACH_TAG_TO_VM_FAILED Error Failed to attach Tag USD{TagName} to VM(s) USD{VmsNames} (User: USD{UserName}). 442 USER_ATTACH_TAG_TO_VDS Info Tag USD{TagName} was attached to Host(s) USD{VdsNames} by USD{UserName}. 443 USER_ATTACH_TAG_TO_VDS_FAILED Error Failed to attach Tag USD{TagName} to Host(s) USD{VdsNames} (User: USD{UserName}). 444 USER_DETACH_VDS_FROM_TAG Info Tag USD{TagName} was detached from Host(s) USD{VdsNames} by USD{UserName}. 445 USER_DETACH_VDS_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from Host(s) USD{VdsNames} (User: USD{UserName}). 446 USER_DETACH_VM_FROM_TAG Info Tag USD{TagName} was detached from VM(s) USD{VmsNames} by USD{UserName}. 447 USER_DETACH_VM_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from VM(s) USD{VmsNames} (User: USD{UserName}). 448 USER_DETACH_USER_FROM_TAG Info Tag USD{TagName} detached from User(s) USD{DetachUsersNames} by USD{UserName}. 449 USER_DETACH_USER_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from User(s) USD{DetachUsersNames} (User: USD{UserName}). 450 USER_DETACH_USER_GROUP_FROM_TAG Info Tag USD{TagName} was detached from Group(s) USD{DetachGroupsNames} by USD{UserName}. 451 USER_DETACH_USER_GROUP_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from Group(s) USD{DetachGroupsNames} (User: USD{UserName}). 452 USER_ATTACH_TAG_TO_USER_EXISTS Warning Tag USD{TagName} already attached to User(s) USD{AttachUsersNamesExists}. 453 USER_ATTACH_TAG_TO_USER_GROUP_EXISTS Warning Tag USD{TagName} already attached to Group(s) USD{AttachGroupsNamesExists}. 454 USER_ATTACH_TAG_TO_VM_EXISTS Warning Tag USD{TagName} already attached to VM(s) USD{VmsNamesExists}. 455 USER_ATTACH_TAG_TO_VDS_EXISTS Warning Tag USD{TagName} already attached to Host(s) USD{VdsNamesExists}. 456 USER_LOGGED_IN_VM Info User USD{GuestUser} logged in to VM USD{VmName}. 457 USER_LOGGED_OUT_VM Info User USD{GuestUser} logged out from VM USD{VmName}. 458 USER_LOCKED_VM Info User USD{GuestUser} locked VM USD{VmName}. 459 USER_UNLOCKED_VM Info User USD{GuestUser} unlocked VM USD{VmName}. 460 USER_ATTACH_TAG_TO_TEMPLATE_EXISTS Warning Tag USD{TagName} already attached to Template(s) USD{TemplatesNamesExists}. 467 UPDATE_TAGS_VM_DEFAULT_DISPLAY_TYPE Info Vm USD{VmName} tag default display type was updated 468 UPDATE_TAGS_VM_DEFAULT_DISPLAY_TYPE_FAILED Info Failed to update Vm USD{VmName} tag default display type 470 USER_ATTACH_VM_POOL_TO_AD_GROUP_INTERNAL Info Group USD{GroupName} was attached to VM Pool USD{VmPoolName}. 
471 USER_ATTACH_VM_POOL_TO_AD_GROUP_FAILED_INTERNAL Error Failed to attach Group USD{GroupName} to VM Pool USD{VmPoolName}. 472 USER_ATTACH_USER_TO_POOL_INTERNAL Info User USD{AdUserName} was attached to VM Pool USD{VmPoolName}. 473 USER_ATTACH_USER_TO_POOL_FAILED_INTERNAL Error Failed to attach User USD{AdUserName} to VM Pool USD{VmPoolName} (User: USD{UserName}). 493 VDS_ALREADY_IN_REQUESTED_STATUS Warning Host USD{HostName} is already USD{AgentStatus}, Power Management USD{Operation} operation skipped. 494 VDS_MANUAL_FENCE_STATUS Info Manual fence for host USD{VdsName} was started. 495 VDS_MANUAL_FENCE_STATUS_FAILED Error Manual fence for host USD{VdsName} failed. 496 VDS_FENCE_STATUS Info Host USD{VdsName} power management was verified successfully. 497 VDS_FENCE_STATUS_FAILED Error Failed to verify Host USD{VdsName} power management. 498 VDS_APPROVE Info Host USD{VdsName} was successfully approved by user USD{UserName}. 499 VDS_APPROVE_FAILED Error Failed to approve Host USD{VdsName}. 500 VDS_FAILED_TO_RUN_VMS Error Host USD{VdsName} will be switched to Error status for USD{Time} minutes because it failed to run a VM. 501 USER_SUSPEND_VM Info Suspending VM USD{VmName} was initiated by User USD{UserName} (Host: USD{VdsName}). 502 USER_FAILED_SUSPEND_VM Error Failed to suspend VM USD{VmName} (Host: USD{VdsName}). 503 USER_SUSPEND_VM_OK Info VM USD{VmName} on Host USD{VdsName} is suspended. 504 VDS_INSTALL Info Host USD{VdsName} installed 505 VDS_INSTALL_FAILED Error Host USD{VdsName} installation failed. USD{FailedInstallMessage}. 506 VDS_INITIATED_RUN_VM Info Trying to restart VM USD{VmName} on Host USD{VdsName} 509 VDS_INSTALL_IN_PROGRESS Info Installing Host USD{VdsName}. USD{Message}. 510 VDS_INSTALL_IN_PROGRESS_WARNING Warning Host USD{VdsName} installation in progress . USD{Message}. 511 VDS_INSTALL_IN_PROGRESS_ERROR Error An error has occurred during installation of Host USD{VdsName}: USD{Message}. 512 USER_SUSPEND_VM_FINISH_SUCCESS Info Suspending VM USD{VmName} has been completed. 513 VDS_RECOVER_FAILED_VMS_UNKNOWN Error Host USD{VdsName} cannot be reached, VMs state on this host are marked as Unknown. 514 VDS_INITIALIZING Warning Host USD{VdsName} is initializing. Message: USD{ErrorMessage} 515 VDS_CPU_LOWER_THAN_CLUSTER Warning Host USD{VdsName} moved to Non-Operational state as host does not meet the cluster's minimum CPU level. Missing CPU features : USD{CpuFlags} 516 VDS_CPU_RETRIEVE_FAILED Warning Failed to determine Host USD{VdsName} CPU level - could not retrieve CPU flags. 517 VDS_SET_NONOPERATIONAL Info Host USD{VdsName} moved to Non-Operational state. 518 VDS_SET_NONOPERATIONAL_FAILED Error Failed to move Host USD{VdsName} to Non-Operational state. 519 VDS_SET_NONOPERATIONAL_NETWORK Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} networks, the following networks are missing on host: 'USD{Networks}' 520 USER_ATTACH_USER_TO_VM Info User USD{AdUserName} was attached to VM USD{VmName} by USD{UserName}. 521 USER_SUSPEND_VM_FINISH_FAILURE Error Failed to complete suspending of VM USD{VmName}. 522 VDS_SET_NONOPERATIONAL_DOMAIN Warning Host USD{VdsName} cannot access the Storage Domain(s) USD{StorageDomainNames} attached to the Data Center USD{StoragePoolName}. Setting Host state to Non-Operational. 523 VDS_SET_NONOPERATIONAL_DOMAIN_FAILED Error Host USD{VdsName} cannot access the Storage Domain(s) USD{StorageDomainNames} attached to the Data Center USD{StoragePoolName}. Failed to set Host state to Non-Operational. 
524 VDS_DOMAIN_DELAY_INTERVAL Warning Storage domain USD{StorageDomainName} experienced a high latency of USD{Delay} seconds from host USD{VdsName}. This may cause performance and functional issues. Please consult your Storage Administrator. 525 VDS_INITIATED_RUN_AS_STATELESS_VM_NOT_YET_RUNNING Info Starting VM USD{VmName} as stateless was initiated. 528 USER_EJECT_VM_DISK Info CD was ejected from VM USD{VmName} by USD{UserName}. 530 VDS_MANUAL_FENCE_FAILED_CALL_FENCE_SPM Warning Manual fence did not revoke the selected SPM (USD{VdsName}) since the master storage domain\n was not active or could not use another host for the fence operation. 531 VDS_LOW_MEM Warning Available memory of host USD{HostName} in cluster USD{Cluster} [USD{AvailableMemory} MB] is under defined threshold [USD{Threshold} MB]. 532 VDS_HIGH_MEM_USE Warning Used memory of host USD{HostName} in cluster USD{Cluster} [USD{UsedMemory}%] exceeded defined threshold [USD{Threshold}%]. 533 VDS_HIGH_NETWORK_USE Warning 534 VDS_HIGH_CPU_USE Warning Used CPU of host USD{HostName} [USD{UsedCpu}%] exceeded defined threshold [USD{Threshold}%]. 535 VDS_HIGH_SWAP_USE Warning Used swap memory of host USD{HostName} [USD{UsedSwap}%] exceeded defined threshold [USD{Threshold}%]. 536 VDS_LOW_SWAP Warning Available swap memory of host USD{HostName} [USD{AvailableSwapMemory} MB] is under defined threshold [USD{Threshold} MB]. 537 VDS_INITIATED_RUN_VM_AS_STATELESS Info VM USD{VmName} was restarted on Host USD{VdsName} as stateless 538 USER_RUN_VM_AS_STATELESS Info VM USD{VmName} started on Host USD{VdsName} as stateless 539 VDS_AUTO_FENCE_STATUS Info Auto fence for host USD{VdsName} was started. 540 VDS_AUTO_FENCE_STATUS_FAILED Error Auto fence for host USD{VdsName} failed. 541 VDS_AUTO_FENCE_FAILED_CALL_FENCE_SPM Warning Auto fence did not revoke the selected SPM (USD{VdsName}) since the master storage domain\n was not active or could not use another host for the fence operation. 550 VDS_PACKAGES_IN_PROGRESS Info Package update Host USD{VdsName}. USD{Message}. 551 VDS_PACKAGES_IN_PROGRESS_WARNING Warning Host USD{VdsName} update packages in progress . USD{Message}. 552 VDS_PACKAGES_IN_PROGRESS_ERROR Error Failed to update packages Host USD{VdsName}. USD{Message}. 555 USER_MOVE_TAG Info Tag USD{TagName} was moved from USD{OldParnetTagName} to USD{NewParentTagName} by USD{UserName}. 556 USER_MOVE_TAG_FAILED Error Failed to move Tag USD{TagName} from USD{OldParnetTagName} to USD{NewParentTagName} (User: USD{UserName}). 560 VDS_ANSIBLE_INSTALL_STARTED Info Ansible host-deploy playbook execution has started on host USD{VdsName}. 561 VDS_ANSIBLE_INSTALL_FINISHED Info Ansible host-deploy playbook execution has successfully finished on host USD{VdsName}. 562 VDS_ANSIBLE_HOST_REMOVE_STARTED Info Ansible host-remove playbook execution started on host USD{VdsName}. 563 VDS_ANSIBLE_HOST_REMOVE_FINISHED Info Ansible host-remove playbook execution has successfully finished on host USD{VdsName}. For more details check log USD{LogFile} 564 VDS_ANSIBLE_HOST_REMOVE_FAILED Warning Ansible host-remove playbook execution failed on host USD{VdsName}. For more details please check log USD{LogFile} 565 VDS_ANSIBLE_HOST_REMOVE_EXECUTION_FAILED Info Ansible host-remove playbook execution failed on host USD{VdsName} with message: USD{Message} 600 USER_VDS_MAINTENANCE Info Host USD{VdsName} was switched to Maintenance mode by USD{UserName} (Reason: USD{Reason}). 601 CPU_FLAGS_NX_IS_MISSING Warning Host USD{VdsName} is missing the NX cpu flag. 
This flag can be enabled via the host BIOS. Please set Disable Execute (XD) for an Intel host, or No Execute (NX) for AMD. Please make sure to completely power off the host for this change to take effect. 602 USER_VDS_MAINTENANCE_MIGRATION_FAILED Warning Host USD{VdsName} cannot change into maintenance mode - not all Vms have been migrated successfully. Consider manual intervention: stopping/migrating Vms: USD{failedVms} (User: USD{UserName}). 603 VDS_SET_NONOPERATIONAL_IFACE_DOWN Warning Host USD{VdsName} moved to Non-Operational state because interfaces which are down are needed by required networks in the current cluster: 'USD{NicsWithNetworks}'. 604 VDS_TIME_DRIFT_ALERT Warning Host USD{VdsName} has time-drift of USD{Actual} seconds while maximum configured value is USD{Max} seconds. 605 PROXY_HOST_SELECTION Info Host USD{Proxy} from USD{Origin} was chosen as a proxy to execute fencing on Host USD{VdsName}. 606 HOST_REFRESHED_CAPABILITIES Info Successfully refreshed the capabilities of host USD{VdsName}. 607 HOST_REFRESH_CAPABILITIES_FAILED Error Failed to refresh the capabilities of host USD{VdsName}. 608 HOST_INTERFACE_HIGH_NETWORK_USE Warning Host USD{HostName} has network interface which exceeded the defined threshold [USD{Threshold}%] (USD{InterfaceName}: transmit rate[USD{TransmitRate}%], receive rate [USD{ReceiveRate}%]) 609 HOST_INTERFACE_STATE_UP Normal Interface USD{InterfaceName} on host USD{VdsName}, changed state to up 610 HOST_INTERFACE_STATE_DOWN Warning Interface USD{InterfaceName} on host USD{VdsName}, changed state to down 611 HOST_BOND_SLAVE_STATE_UP Normal Slave USD{SlaveName} of bond USD{BondName} on host USD{VdsName}, changed state to up 612 HOST_BOND_SLAVE_STATE_DOWN Warning Slave USD{SlaveName} of bond USD{BondName} on host USD{VdsName}, changed state to down 613 FENCE_KDUMP_LISTENER_IS_NOT_ALIVE Error Unable to determine if Kdump is in progress on host USD{VdsName}, because fence_kdump listener is not running. 614 KDUMP_FLOW_DETECTED_ON_VDS Info Kdump flow is in progress on host USD{VdsName}. 615 KDUMP_FLOW_NOT_DETECTED_ON_VDS Info Kdump flow is not in progress on host USD{VdsName}. 616 KDUMP_FLOW_FINISHED_ON_VDS Info Kdump flow finished on host USD{VdsName}. 617 KDUMP_DETECTION_NOT_CONFIGURED_ON_VDS Warning Kdump integration is enabled for host USD{VdsName}, but kdump is not configured properly on host. 618 HOST_REGISTRATION_FAILED_INVALID_CLUSTER Info No default or valid cluster was found, Host USD{VdsName} registration failed 619 HOST_PROTOCOL_INCOMPATIBLE_WITH_CLUSTER Warning Host USD{VdsName} uses not compatible protocol during activation (xmlrpc instead of jsonrpc). Please examine installation logs and VDSM logs for failures and reinstall the host. 620 USER_VDS_MAINTENANCE_WITHOUT_REASON Info Host USD{VdsName} was switched to Maintenance mode by USD{UserName}. 650 USER_UNDO_RESTORE_FROM_SNAPSHOT_START Info Undoing a Snapshot-Preview for VM USD{VmName} was initialized by USD{UserName}. 651 USER_UNDO_RESTORE_FROM_SNAPSHOT_FINISH_SUCCESS Info Undoing a Snapshot-Preview for VM USD{VmName} has been completed. 652 USER_UNDO_RESTORE_FROM_SNAPSHOT_FINISH_FAILURE Error Failed to undo Snapshot-Preview for VM USD{VmName}. 700 DISK_ALIGNMENT_SCAN_START Info Starting alignment scan of disk 'USD{DiskAlias}'. 701 DISK_ALIGNMENT_SCAN_FAILURE Warning Alignment scan of disk 'USD{DiskAlias}' failed. 702 DISK_ALIGNMENT_SCAN_SUCCESS Info Alignment scan of disk 'USD{DiskAlias}' is complete. 
809 USER_ADD_CLUSTER Info Cluster USD{ClusterName} was added by USD{UserName} 810 USER_ADD_CLUSTER_FAILED Error Failed to add Host cluster (User: USD{UserName}) 811 USER_UPDATE_CLUSTER Info Host cluster USD{ClusterName} was updated by USD{UserName} 812 USER_UPDATE_CLUSTER_FAILED Error Failed to update Host cluster (User: USD{UserName}) 813 USER_REMOVE_CLUSTER Info Host cluster USD{ClusterName} was removed by USD{UserName} 814 USER_REMOVE_CLUSTER_FAILED Error Failed to remove Host cluster (User: USD{UserName}) 815 USER_VDC_LOGOUT_FAILED Error Failed to log out user USD{UserName} connected from 'USD{SourceIP}' using session 'USD{SessionID}'. 816 MAC_POOL_EMPTY Warning No MAC addresses left in the MAC Address Pool. 817 CERTIFICATE_FILE_NOT_FOUND Error Could not find oVirt Engine Certificate file. 818 RUN_VM_FAILED Error Cannot run VM USD{VmName} on Host USD{VdsName}. Error: USD{ErrMsg} 819 VDS_REGISTER_ERROR_UPDATING_HOST Error Host registration failed - cannot update Host Name for Host USD{VdsName2}. (Host: USD{VdsName1}) 820 VDS_REGISTER_ERROR_UPDATING_HOST_ALL_TAKEN Error Host registration failed - all available Host Names are taken. (Host: USD{VdsName1}) 821 VDS_REGISTER_HOST_IS_ACTIVE Error Host registration failed - cannot change Host Name of active Host USD{VdsName2}. (Host: USD{VdsName1}) 822 VDS_REGISTER_ERROR_UPDATING_NAME Error Host registration failed - cannot update Host Name for Host USD{VdsName2}. (Host: USD{VdsName1}) 823 VDS_REGISTER_ERROR_UPDATING_NAMES_ALL_TAKEN Error Host registration failed - all available Host Names are taken. (Host: USD{VdsName1}) 824 VDS_REGISTER_NAME_IS_ACTIVE Error Host registration failed - cannot change Host Name of active Host USD{VdsName2}. (Host: USD{VdsName1}) 825 VDS_REGISTER_AUTO_APPROVE_PATTERN Error Host registration failed - auto approve pattern error. (Host: USD{VdsName1}) 826 VDS_REGISTER_FAILED Error Host registration failed. (Host: USD{VdsName1}) 827 VDS_REGISTER_EXISTING_VDS_UPDATE_FAILED Error Host registration failed - cannot update existing Host. (Host: USD{VdsName1}) 828 VDS_REGISTER_SUCCEEDED Info Host USD{VdsName1} registered. 829 VM_MIGRATION_ON_CONNECT_CHECK_FAILED Error VM migration logic failed. (VM name: USD{VmName}) 830 VM_MIGRATION_ON_CONNECT_CHECK_SUCCEEDED Info Migration check failed to execute. 831 USER_VDC_SESSION_TERMINATED Info User USD{UserName} forcibly logged out user USD{TerminatedSessionUsername} connected from 'USD{SourceIP}' using session 'USD{SessionID}'. 832 USER_VDC_SESSION_TERMINATION_FAILED Error User USD{UserName} failed to forcibly log out user USD{TerminatedSessionUsername} connected from 'USD{SourceIP}' using session 'USD{SessionID}'. 833 MAC_ADDRESS_IS_IN_USE Warning Network Interface USD{IfaceName} has MAC address USD{MACAddr} which is in use. 834 VDS_REGISTER_EMPTY_ID Warning Host registration failed, empty host id (Host: USD{VdsHostName}) 835 SYSTEM_UPDATE_CLUSTER Info Host cluster USD{ClusterName} was updated by system 836 SYSTEM_UPDATE_CLUSTER_FAILED Info Failed to update Host cluster by system 837 MAC_ADDRESSES_POOL_NOT_INITIALIZED Warning Mac Address Pool is not initialized. USD{Message} 838 MAC_ADDRESS_IS_IN_USE_UNPLUG Warning Network Interface USD{IfaceName} has MAC address USD{MACAddr} which is in use, therefore it is being unplugged from VM USD{VmName}. 839 HOST_AVAILABLE_UPDATES_FAILED Error Failed to check for available updates on host USD{VdsName} with message 'USD{Message}'. 840 HOST_UPGRADE_STARTED Info Host USD{VdsName} upgrade was started (User: USD{UserName}). 
841 HOST_UPGRADE_FAILED Error Failed to upgrade Host USD{VdsName} (User: USD{UserName}). 842 HOST_UPGRADE_FINISHED Info Host USD{VdsName} upgrade was completed successfully. 845 HOST_CERTIFICATION_IS_ABOUT_TO_EXPIRE Warning Host USD{VdsName} certification is about to expire at USD{ExpirationDate}. Please renew the host's certification. 846 ENGINE_CERTIFICATION_HAS_EXPIRED Info Engine's certification has expired at USD{ExpirationDate}. Please renew the engine's certification. 847 ENGINE_CERTIFICATION_IS_ABOUT_TO_EXPIRE Warning Engine's certification is about to expire at USD{ExpirationDate}. Please renew the engine's certification. 848 ENGINE_CA_CERTIFICATION_HAS_EXPIRED Info Engine's CA certification has expired at USD{ExpirationDate}. 849 ENGINE_CA_CERTIFICATION_IS_ABOUT_TO_EXPIRE Warning Engine's CA certification is about to expire at USD{ExpirationDate}. 850 USER_ADD_PERMISSION Info User/Group USD{SubjectName}, Namespace USD{Namespace}, Authorization provider: USD{Authz} was granted permission for Role USD{RoleName} on USD{VdcObjectType} USD{VdcObjectName}, by USD{UserName}. 851 USER_ADD_PERMISSION_FAILED Error User USD{UserName} failed to grant permission for Role USD{RoleName} on USD{VdcObjectType} USD{VdcObjectName} to User/Group USD{SubjectName}. 852 USER_REMOVE_PERMISSION Info User/Group USD{SubjectName} Role USD{RoleName} permission was removed from USD{VdcObjectType} USD{VdcObjectName} by USD{UserName} 853 USER_REMOVE_PERMISSION_FAILED Error User USD{UserName} failed to remove permission for Role USD{RoleName} from USD{VdcObjectType} USD{VdcObjectName} to User/Group USD{SubjectName} 854 USER_ADD_ROLE Info Role USD{RoleName} granted to USD{UserName} 855 USER_ADD_ROLE_FAILED Error Failed to grant role USD{RoleName} (User USD{UserName}) 856 USER_UPDATE_ROLE Info USD{UserName} Role was updated to the USD{RoleName} Role 857 USER_UPDATE_ROLE_FAILED Error Failed to update role USD{RoleName} to USD{UserName} 858 USER_REMOVE_ROLE Info Role USD{RoleName} removed from USD{UserName} 859 USER_REMOVE_ROLE_FAILED Error Failed to remove role USD{RoleName} (User USD{UserName}) 860 USER_ATTACHED_ACTION_GROUP_TO_ROLE Info Action group USD{ActionGroup} was attached to Role USD{RoleName} by USD{UserName} 861 USER_ATTACHED_ACTION_GROUP_TO_ROLE_FAILED Error Failed to attach Action group USD{ActionGroup} to Role USD{RoleName} (User: USD{UserName}) 862 USER_DETACHED_ACTION_GROUP_FROM_ROLE Info Action group USD{ActionGroup} was detached from Role USD{RoleName} by USD{UserName} 863 USER_DETACHED_ACTION_GROUP_FROM_ROLE_FAILED Error Failed to attach Action group USD{ActionGroup} to Role USD{RoleName} by USD{UserName} 864 USER_ADD_ROLE_WITH_ACTION_GROUP Info Role USD{RoleName} was added by USD{UserName} 865 USER_ADD_ROLE_WITH_ACTION_GROUP_FAILED Error Failed to add role USD{RoleName} 866 USER_ADD_SYSTEM_PERMISSION Info User/Group USD{SubjectName} was granted permission for Role USD{RoleName} on USD{VdcObjectType} by USD{UserName}. 867 USER_ADD_SYSTEM_PERMISSION_FAILED Error User USD{UserName} failed to grant permission for Role USD{RoleName} on USD{VdcObjectType} to User/Group USD{SubjectName}. 
868 USER_REMOVE_SYSTEM_PERMISSION Info User/Group USD{SubjectName} Role USD{RoleName} permission was removed from USD{VdcObjectType} by USD{UserName} 869 USER_REMOVE_SYSTEM_PERMISSION_FAILED Error User USD{UserName} failed to remove permission for Role USD{RoleName} from USD{VdcObjectType} to User/Group USD{SubjectName} 870 USER_ADD_PROFILE Info Profile created for USD{UserName} 871 USER_ADD_PROFILE_FAILED Error Failed to create profile for USD{UserName} 872 USER_UPDATE_PROFILE Info Updated profile for USD{UserName} 873 USER_UPDATE_PROFILE_FAILED Error Failed to update profile for USD{UserName} 874 USER_REMOVE_PROFILE Info Removed profile for USD{UserName} 875 USER_REMOVE_PROFILE_FAILED Error Failed to remove profile for USD{UserName} 876 HOST_CERTIFICATION_IS_INVALID Error Host USD{VdsName} certification is invalid. The certification has no peer certificates. 877 HOST_CERTIFICATION_HAS_EXPIRED Info Host USD{VdsName} certification has expired at USD{ExpirationDate}. Please renew the host's certification. 878 ENGINE_CERTIFICATION_IS_ABOUT_TO_EXPIRE_ALERT Info Engine's certification is about to expire at USD{ExpirationDate}. Please renew the engine's certification. 879 HOST_CERTIFICATION_IS_ABOUT_TO_EXPIRE_ALERT Info Host USD{VdsName} certification is about to expire at USD{ExpirationDate}. Please renew the host's certification. 880 HOST_CERTIFICATION_ENROLLMENT_STARTED Normal Enrolling certificate for host USD{VdsName} was started (User: USD{UserName}). 881 HOST_CERTIFICATION_ENROLLMENT_FINISHED Normal Enrolling certificate for host USD{VdsName} was completed successfully (User: USD{UserName}). 882 HOST_CERTIFICATION_ENROLLMENT_FAILED Error Failed to enroll certificate for host USD{VdsName} (User: USD{UserName}). 883 ENGINE_CA_CERTIFICATION_IS_ABOUT_TO_EXPIRE_ALERT Info Engine's CA certification is about to expire at USD{ExpirationDate}. 884 HOST_AVAILABLE_UPDATES_STARTED Info Started to check for available updates on host USD{VdsName}. 885 HOST_AVAILABLE_UPDATES_FINISHED Info Check for available updates on host USD{VdsName} was completed successfully with message 'USD{Message}'. 886 HOST_AVAILABLE_UPDATES_PROCESS_IS_ALREADY_RUNNING Warning Failed to check for available updates on host USD{VdsName}: Another process is already running. 887 HOST_AVAILABLE_UPDATES_SKIPPED_UNSUPPORTED_STATUS Warning Failed to check for available updates on host USD{VdsName}: Unsupported host status. 890 HOST_UPGRADE_FINISHED_MANUAL_HA Warning Host USD{VdsName} upgrade was completed successfully, but the Hosted Engine HA service may still be in maintenance mode. If necessary, please correct this manually. 900 AD_COMPUTER_ACCOUNT_SUCCEEDED Info Account creation successful. 901 AD_COMPUTER_ACCOUNT_FAILED Error Account creation failed. 918 USER_FORCE_REMOVE_STORAGE_POOL Info Data Center USD{StoragePoolName} was forcibly removed by USD{UserName} 919 USER_FORCE_REMOVE_STORAGE_POOL_FAILED Error Failed to forcibly remove Data Center USD{StoragePoolName}. (User: USD{UserName}) 925 MAC_ADDRESS_IS_EXTERNAL Warning VM USD{VmName} has MAC address(es) USD{MACAddr}, which is/are out of its MAC pool definitions. 926 NETWORK_REMOVE_BOND Info Remove bond: USD{BondName} for Host: USD{VdsName} (User:USD{UserName}). 927 NETWORK_REMOVE_BOND_FAILED Error Failed to remove bond: USD{BondName} for Host: USD{VdsName} (User:USD{UserName}). 
928 NETWORK_VDS_NETWORK_MATCH_CLUSTER Info Vds USD{VdsName} network match to cluster USD{ClusterName} 929 NETWORK_VDS_NETWORK_NOT_MATCH_CLUSTER Error Vds USD{VdsName} network does not match to cluster USD{ClusterName} 930 NETWORK_REMOVE_VM_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was removed from VM USD{VmName}. (User: USD{UserName}) 931 NETWORK_REMOVE_VM_INTERFACE_FAILED Error Failed to remove Interface USD{InterfaceName} (USD{InterfaceType}) from VM USD{VmName}. (User: USD{UserName}) 932 NETWORK_ADD_VM_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was added to VM USD{VmName}. (User: USD{UserName}) 933 NETWORK_ADD_VM_INTERFACE_FAILED Error Failed to add Interface USD{InterfaceName} (USD{InterfaceType}) to VM USD{VmName}. (User: USD{UserName}) 934 NETWORK_UPDATE_VM_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was updated for VM USD{VmName}. USD{LinkState} (User: USD{UserName}) 935 NETWORK_UPDATE_VM_INTERFACE_FAILED Error Failed to update Interface USD{InterfaceName} (USD{InterfaceType}) for VM USD{VmName}. (User: USD{UserName}) 936 NETWORK_ADD_TEMPLATE_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was added to Template USD{VmTemplateName}. (User: USD{UserName}) 937 NETWORK_ADD_TEMPLATE_INTERFACE_FAILED Error Failed to add Interface USD{InterfaceName} (USD{InterfaceType}) to Template USD{VmTemplateName}. (User: USD{UserName}) 938 NETWORK_REMOVE_TEMPLATE_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was removed from Template USD{VmTemplateName}. (User: USD{UserName}) 939 NETWORK_REMOVE_TEMPLATE_INTERFACE_FAILED Error Failed to remove Interface USD{InterfaceName} (USD{InterfaceType}) from Template USD{VmTemplateName}. (User: USD{UserName}) 940 NETWORK_UPDATE_TEMPLATE_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was updated for Template USD{VmTemplateName}. (User: USD{UserName}) 941 NETWORK_UPDATE_TEMPLATE_INTERFACE_FAILED Error Failed to update Interface USD{InterfaceName} (USD{InterfaceType}) for Template USD{VmTemplateName}. (User: USD{UserName}) 942 NETWORK_ADD_NETWORK Info Network USD{NetworkName} was added to Data Center: USD{StoragePoolName} 943 NETWORK_ADD_NETWORK_FAILED Error Failed to add Network USD{NetworkName} to Data Center: USD{StoragePoolName} 944 NETWORK_REMOVE_NETWORK Info Network USD{NetworkName} was removed from Data Center: USD{StoragePoolName} 945 NETWORK_REMOVE_NETWORK_FAILED Error Failed to remove Network USD{NetworkName} from Data Center: USD{StoragePoolName} 946 NETWORK_ATTACH_NETWORK_TO_CLUSTER Info Network USD{NetworkName} attached to Cluster USD{ClusterName} 947 NETWORK_ATTACH_NETWORK_TO_CLUSTER_FAILED Error Failed to attach Network USD{NetworkName} to Cluster USD{ClusterName} 948 NETWORK_DETACH_NETWORK_TO_CLUSTER Info Network USD{NetworkName} detached from Cluster USD{ClusterName} 949 NETWORK_DETACH_NETWORK_TO_CLUSTER_FAILED Error Failed to detach Network USD{NetworkName} from Cluster USD{ClusterName} 950 USER_ADD_STORAGE_POOL Info Data Center USD{StoragePoolName}, Compatibility Version USD{CompatibilityVersion} and Quota Type USD{QuotaEnforcementType} was added by USD{UserName} 951 USER_ADD_STORAGE_POOL_FAILED Error Failed to add Data Center USD{StoragePoolName}. (User: USD{UserName}) 952 USER_UPDATE_STORAGE_POOL Info Data Center USD{StoragePoolName} was updated by USD{UserName} 953 USER_UPDATE_STORAGE_POOL_FAILED Error Failed to update Data Center USD{StoragePoolName}. 
(User: USD{UserName}) 954 USER_REMOVE_STORAGE_POOL Info Data Center USD{StoragePoolName} was removed by USD{UserName} 955 USER_REMOVE_STORAGE_POOL_FAILED Error Failed to remove Data Center USD{StoragePoolName}. (User: USD{UserName}) 956 USER_ADD_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} was added by USD{UserName} 957 USER_ADD_STORAGE_DOMAIN_FAILED Error Failed to add Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 958 USER_UPDATE_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} was updated by USD{UserName} 959 USER_UPDATE_STORAGE_DOMAIN_FAILED Error Failed to update Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 960 USER_REMOVE_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} was removed by USD{UserName} 961 USER_REMOVE_STORAGE_DOMAIN_FAILED Error Failed to remove Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 962 USER_ATTACH_STORAGE_DOMAIN_TO_POOL Info Storage Domain USD{StorageDomainName} was attached to Data Center USD{StoragePoolName} by USD{UserName} 963 USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED Error Failed to attach Storage Domain USD{StorageDomainName} to Data Center USD{StoragePoolName}. (User: USD{UserName}) 964 USER_DETACH_STORAGE_DOMAIN_FROM_POOL Info Storage Domain USD{StorageDomainName} was detached from Data Center USD{StoragePoolName} by USD{UserName} 965 USER_DETACH_STORAGE_DOMAIN_FROM_POOL_FAILED Error Failed to detach Storage Domain USD{StorageDomainName} from Data Center USD{StoragePoolName}. (User: USD{UserName}) 966 USER_ACTIVATED_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) was activated by USD{UserName} 967 USER_ACTIVATE_STORAGE_DOMAIN_FAILED Error Failed to activate Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) by USD{UserName} 968 USER_DEACTIVATED_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) was deactivated and has moved to 'Preparing for maintenance' until it will no longer be accessed by any Host of the Data Center. 969 USER_DEACTIVATE_STORAGE_DOMAIN_FAILED Error Failed to deactivate Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}). 970 SYSTEM_DEACTIVATED_STORAGE_DOMAIN Warning Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) was deactivated by system because it's not visible by any of the hosts. 971 SYSTEM_DEACTIVATE_STORAGE_DOMAIN_FAILED Error Failed to deactivate Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}). 972 USER_EXTENDED_STORAGE_DOMAIN Info Storage USD{StorageDomainName} has been extended by USD{UserName}. Please wait for refresh. 973 USER_EXTENDED_STORAGE_DOMAIN_FAILED Error Failed to extend Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 974 USER_REMOVE_VG Info Volume group USD{VgId} was removed by USD{UserName}. 975 USER_REMOVE_VG_FAILED Error Failed to remove Volume group USD{VgId}. (User: UserName) 976 USER_ACTIVATE_STORAGE_POOL Info Data Center USD{StoragePoolName} was activated. (User: USD{UserName}) 977 USER_ACTIVATE_STORAGE_POOL_FAILED Error Failed to activate Data Center USD{StoragePoolName}. (User: USD{UserName}) 978 SYSTEM_FAILED_CHANGE_STORAGE_POOL_STATUS Error Failed to change Data Center USD{StoragePoolName} status. 979 SYSTEM_CHANGE_STORAGE_POOL_STATUS_NO_HOST_FOR_SPM Error Fencing failed on Storage Pool Manager USD{VdsName} for Data Center USD{StoragePoolName}. Setting status to Non-Operational. 
980 SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC Warning Invalid status on Data Center USD{StoragePoolName}. Setting status to Non Responsive. 981 USER_FORCE_REMOVE_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} was forcibly removed by USD{UserName} 982 USER_FORCE_REMOVE_STORAGE_DOMAIN_FAILED Error Failed to forcibly remove Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 983 RECONSTRUCT_MASTER_FAILED_NO_MASTER Warning No valid Data Storage Domains are available in Data Center USD{StoragePoolName} (please check your storage infrastructure). 984 RECONSTRUCT_MASTER_DONE Info Reconstruct Master Domain for Data Center USD{StoragePoolName} completed. 985 RECONSTRUCT_MASTER_FAILED Error Failed to Reconstruct Master Domain for Data Center USD{StoragePoolName}. 986 SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC_SEARCHING_NEW_SPM Warning Data Center is being initialized, please wait for initialization to complete. 987 SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC_WITH_ERROR Warning Invalid status on Data Center USD{StoragePoolName}. Setting Data Center status to Non Responsive (On host USD{VdsName}, Error: USD{Error}). 988 USER_CONNECT_HOSTS_TO_LUN_FAILED Error Failed to connect Host USD{VdsName} to device. (User: USD{UserName}) 989 SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC_FROM_NON_OPERATIONAL Info Try to recover Data Center USD{StoragePoolName}. Setting status to Non Responsive. 990 SYSTEM_MASTER_DOMAIN_NOT_IN_SYNC Warning Sync Error on Master Domain between Host USD{VdsName} and oVirt Engine. Domain: USD{StorageDomainName} is marked as Master in oVirt Engine database but not on the Storage side. Please consult with Support on how to fix this issue. 991 RECOVERY_STORAGE_POOL Info Data Center USD{StoragePoolName} was recovered by USD{UserName} 992 RECOVERY_STORAGE_POOL_FAILED Error Failed to recover Data Center USD{StoragePoolName} (User:USD{UserName}) 993 SYSTEM_CHANGE_STORAGE_POOL_STATUS_RESET_IRS Info Data Center USD{StoragePoolName} was reset. Setting status to Non Responsive (Elect new Storage Pool Manager). 994 CONNECT_STORAGE_SERVERS_FAILED Warning Failed to connect Host USD{VdsName} to Storage Servers 995 CONNECT_STORAGE_POOL_FAILED Warning Failed to connect Host USD{VdsName} to Storage Pool USD{StoragePoolName} 996 STORAGE_DOMAIN_ERROR Error The error message for connection USD{Connection} returned by VDSM was: USD{ErrorMessage} 997 REFRESH_REPOSITORY_IMAGE_LIST_FAILED Error Refresh image list failed for domain(s): USD{imageDomains}. Please check domain activity. 998 REFRESH_REPOSITORY_IMAGE_LIST_SUCCEEDED Info Refresh image list succeeded for domain(s): USD{imageDomains} 999 STORAGE_ALERT_VG_METADATA_CRITICALLY_FULL Error The system has reached the 80% watermark on the VG metadata area size on USD{StorageDomainName}.\nThis is due to a high number of Vdisks or large Vdisks size allocated on this specific VG. 1000 STORAGE_ALERT_SMALL_VG_METADATA Warning The allocated VG metadata area size is smaller than 50MB on USD{StorageDomainName},\nwhich might limit its capacity (the number of Vdisks and/or their size). 1001 USER_RUN_VM_FAILURE_STATELESS_SNAPSHOT_LEFT Error Failed to start VM USD{VmName}, because exist snapshot for stateless state. Snapshot will be deleted. 1002 USER_ATTACH_STORAGE_DOMAINS_TO_POOL Info Storage Domains were attached to Data Center USD{StoragePoolName} by USD{UserName} 1003 USER_ATTACH_STORAGE_DOMAINS_TO_POOL_FAILED Error Failed to attach Storage Domains to Data Center USD{StoragePoolName}. 
(User: USD{UserName}) 1004 STORAGE_DOMAIN_TASKS_ERROR Warning Storage Domain USD{StorageDomainName} is down while there are tasks running on it. These tasks may fail. 1005 UPDATE_OVF_FOR_STORAGE_POOL_FAILED Warning Failed to update VMs/Templates OVF data in Data Center USD{StoragePoolName}. 1006 UPGRADE_STORAGE_POOL_ENCOUNTERED_PROBLEMS Warning Data Center USD{StoragePoolName} has encountered problems during upgrade process. 1007 REFRESH_REPOSITORY_IMAGE_LIST_INCOMPLETE Warning Refresh image list probably incomplete for domain USD{imageDomain}, only USD{imageListSize} images discovered. 1008 NUMBER_OF_LVS_ON_STORAGE_DOMAIN_EXCEEDED_THRESHOLD Warning The number of LVs on the domain USD{storageDomainName} exceeded USD{maxNumOfLVs}, you are approaching the limit where performance may degrade. 1009 USER_DEACTIVATE_STORAGE_DOMAIN_OVF_UPDATE_INCOMPLETE Warning Failed to deactivate Storage Domain USD{StorageDomainName} as the engine was restarted during the operation, please retry. (Data Center USD{StoragePoolName}). 1010 RELOAD_CONFIGURATIONS_SUCCESS Info System Configurations reloaded successfully. 1011 RELOAD_CONFIGURATIONS_FAILURE Error System Configurations failed to reload. 1012 NETWORK_ACTIVATE_VM_INTERFACE_SUCCESS Info Network Interface USD{InterfaceName} (USD{InterfaceType}) was plugged to VM USD{VmName}. (User: USD{UserName}) 1013 NETWORK_ACTIVATE_VM_INTERFACE_FAILURE Error Failed to plug Network Interface USD{InterfaceName} (USD{InterfaceType}) to VM USD{VmName}. (User: USD{UserName}) 1014 NETWORK_DEACTIVATE_VM_INTERFACE_SUCCESS Info Network Interface USD{InterfaceName} (USD{InterfaceType}) was unplugged from VM USD{VmName}. (User: USD{UserName}) 1015 NETWORK_DEACTIVATE_VM_INTERFACE_FAILURE Error Failed to unplug Network Interface USD{InterfaceName} (USD{InterfaceType}) from VM USD{VmName}. (User: USD{UserName}) 1016 UPDATE_FOR_OVF_STORES_FAILED Warning Failed to update OVF disks USD{DisksIds}, OVF data isn't updated on those OVF stores (Data Center USD{DataCenterName}, Storage Domain USD{StorageDomainName}). 1017 RETRIEVE_OVF_STORE_FAILED Warning Failed to retrieve VMs and Templates from the OVF disk of Storage Domain USD{StorageDomainName}. 1018 OVF_STORE_DOES_NOT_EXISTS Warning This Data center compatibility version does not support importing a data domain with its entities (VMs and Templates). The imported domain will be imported without them. 1019 UPDATE_DESCRIPTION_FOR_DISK_FAILED Error Failed to update the metadata description of disk USD{DiskName} (Data Center USD{DataCenterName}, Storage Domain USD{StorageDomainName}). 1020 UPDATE_DESCRIPTION_FOR_DISK_SKIPPED_SINCE_STORAGE_DOMAIN_NOT_ACTIVE Warning Not updating the metadata of Disk USD{DiskName} (Data Center USD{DataCenterName}), since the Storage Domain USD{StorageDomainName} is not active. 1022 USER_REFRESH_LUN_STORAGE_DOMAIN Info Resize LUNs operation succeeded. 1023 USER_REFRESH_LUN_STORAGE_DOMAIN_FAILED Error Failed to resize LUNs. 1024 USER_REFRESH_LUN_STORAGE_DIFFERENT_SIZE_DOMAIN_FAILED Error Failed to resize LUNs.\n Not all the hosts are seeing the same LUN size. 1025 VM_PAUSED Info VM USD{VmName} has been paused. 1026 FAILED_TO_STORE_ENTIRE_DISK_FIELD_IN_DISK_DESCRIPTION_METADATA Warning Failed to store field USD{DiskFieldName} as a part of USD{DiskAlias}'s description metadata due to storage space limitations. The field USD{DiskFieldName} will be truncated. 
1027 FAILED_TO_STORE_ENTIRE_DISK_FIELD_AND_REST_OF_FIELDS_IN_DISK_DESCRIPTION_METADATA Warning Failed to store field USD{DiskFieldName} as a part of USD{DiskAlias}'s description metadata due to storage space limitations. The value will be truncated and the following fields will not be stored at all: USD{DiskFieldsNames}. 1028 FAILED_TO_STORE_DISK_FIELDS_IN_DISK_DESCRIPTION_METADATA Warning Failed to store the following fields in the description metadata of disk USD{DiskAlias} due to storage space limitations: USD{DiskFieldsNames}. 1029 STORAGE_DOMAIN_MOVED_TO_MAINTENANCE Info Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) successfully moved to Maintenance as it's no longer accessed by any Host of the Data Center. 1030 USER_DEACTIVATED_LAST_MASTER_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) was deactivated. 1031 TRANSFER_IMAGE_INITIATED Info Image USD{TransferType} with disk USD{DiskAlias} was initiated by USD{UserName}. 1032 TRANSFER_IMAGE_SUCCEEDED Info Image USD{TransferType} with disk USD{DiskAlias} succeeded. 1033 TRANSFER_IMAGE_CANCELLED Info Image USD{TransferType} with disk USD{DiskAlias} was cancelled. 1034 TRANSFER_IMAGE_FAILED Error Image USD{TransferType} with disk USD{DiskAlias} failed. 1035 TRANSFER_IMAGE_TEARDOWN_FAILED Info Failed to tear down image USD{DiskAlias} after image transfer session. 1036 USER_SCAN_STORAGE_DOMAIN_FOR_UNREGISTERED_DISKS Info Storage Domain USD{StorageDomainName} has finished to scan for unregistered disks by USD{UserName}. 1037 USER_SCAN_STORAGE_DOMAIN_FOR_UNREGISTERED_DISKS_FAILED Error Storage Domain USD{StorageDomainName} failed to scan for unregistered disks by USD{UserName}. 1039 LUNS_BROKE_SD_PASS_DISCARD_SUPPORT Warning Luns with IDs: [USD{LunsIds}] were updated in the DB but caused the storage domain USD{StorageDomainName} (ID USD{storageDomainId}) to stop supporting passing discard from the guest to the underlying storage. Please configure these luns' discard support in the underlying storage or disable 'Enable Discard' for vm disks on this storage domain. 1040 DISKS_WITH_ILLEGAL_PASS_DISCARD_EXIST Warning Disks with IDs: [USD{DisksIds}] have their 'Enable Discard' on even though the underlying storage does not support it. Please configure the underlying storage to support discard or disable 'Enable Discard' for these disks. 1041 USER_REMOVE_DEVICE_FROM_STORAGE_DOMAIN_FAILED Error Failed to remove USD{LunId} from Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 1042 USER_REMOVE_DEVICE_FROM_STORAGE_DOMAIN Info USD{LunId} was removed from Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 1043 USER_REMOVE_DEVICE_FROM_STORAGE_DOMAIN_STARTED Info Started to remove USD{LunId} from Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 1044 ILLEGAL_STORAGE_DOMAIN_DISCARD_AFTER_DELETE Warning The storage domain with id USD{storageDomainId} has its 'Discard After Delete' enabled even though the underlying storage does not support discard. Therefore, disks and snapshots on this storage domain will not be discarded before they are removed. 1045 LUNS_BROKE_SD_DISCARD_AFTER_DELETE_SUPPORT Warning Luns with IDs: [USD{LunsIds}] were updated in the DB but caused the storage domain USD{StorageDomainName} (ID USD{storageDomainId}) to stop supporting discard after delete. Please configure these luns' discard support in the underlying storage or disable 'Discard After Delete' for this storage domain. 
1046 STORAGE_DOMAINS_COULD_NOT_BE_SYNCED Info Storage domains with IDs [USD{StorageDomainsIds}] could not be synchronized. To synchronize them, please move them to maintenance and then activate. 1048 DIRECT_LUNS_COULD_NOT_BE_SYNCED Info Direct LUN disks with IDs [USD{DirectLunDisksIds}] could not be synchronized because there was no active host in the data center. Please synchronize them to get their latest information from the storage. 1052 OVF_STORES_UPDATE_IGNORED Normal OVFs update was ignored - nothing to update for storage domain 'USD{StorageDomainName}' 1060 UPLOAD_IMAGE_CLIENT_ERROR Error Unable to upload image to disk USD{DiskId} due to a client error. Make sure the selected file is readable. 1061 UPLOAD_IMAGE_XHR_TIMEOUT_ERROR Error Unable to upload image to disk USD{DiskId} due to a request timeout error. The upload bandwidth might be too slow. Please try to reduce the chunk size: 'engine-config -s UploadImageChunkSizeKB 1062 UPLOAD_IMAGE_NETWORK_ERROR Error Unable to upload image to disk USD{DiskId} due to a network error. Ensure that ovirt-imageio service is installed and configured and that ovirt-engine's CA certificate is registered as a trusted CA in the browser. The certificate can be fetched from USD{EngineUrl}/ovirt-engine/services/pki-resource?resource 1063 DOWNLOAD_IMAGE_NETWORK_ERROR Error Unable to download disk USD{DiskId} due to a network error. Make sure ovirt-imageio service is installed and configured, and ovirt-engine's certificate is registered as a valid CA in the browser. The certificate can be fetched from https://<engine_url>/ovirt-engine/services/pki-resource?resource 1064 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_TICKET_RENEW_FAILURE Error Transfer was stopped by system. Reason: failure in transfer image ticket renewal. 1065 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_MISSING_TICKET Error Transfer was stopped by system. Reason: missing transfer image ticket. 1067 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_MISSING_HOST Error Transfer was stopped by system. Reason: Could not find a suitable host for image data transfer. 1068 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_FAILED_TO_CREATE_TICKET Error Transfer was stopped by system. Reason: failed to create a signed image ticket. 1069 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_FAILED_TO_ADD_TICKET_TO_DAEMON Error Transfer was stopped by system. Reason: failed to add image ticket to ovirt-imageio-daemon. 1070 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_FAILED_TO_ADD_TICKET_TO_PROXY Error Transfer was stopped by system. Reason: failed to add image ticket to ovirt-imageio. 1071 UPLOAD_IMAGE_PAUSED_BY_SYSTEM_TIMEOUT Error Upload was paused by system. Reason: timeout due to transfer inactivity. 1072 DOWNLOAD_IMAGE_CANCELED_TIMEOUT Error Download was canceled by system. Reason: timeout due to transfer inactivity. 1073 TRANSFER_IMAGE_PAUSED_BY_USER Normal Image transfer was paused by user (USD{UserName}). 1074 TRANSFER_IMAGE_RESUMED_BY_USER Normal Image transfer was resumed by user (USD{UserName}). 1098 NETWORK_UPDATE_DISPLAY_FOR_HOST_WITH_ACTIVE_VM Warning Display Network was updated on Host USD{VdsName} with active VMs attached. The change will be applied to those VMs after their reboot. Running VMs might lose display connectivity until then. 1099 NETWORK_UPDATE_DISPLAY_FOR_CLUSTER_WITH_ACTIVE_VM Warning Display Network (USD{NetworkName}) was updated for Cluster USD{ClusterName} with active VMs attached. The change will be applied to those VMs after their reboot. 
1100 NETWORK_UPDATE_DISPLAY_TO_CLUSTER Info Update Display Network (USD{NetworkName}) for Cluster USD{ClusterName}. (User: USD{UserName}) 1101 NETWORK_UPDATE_DISPLAY_TO_CLUSTER_FAILED Error Failed to update Display Network (USD{NetworkName}) for Cluster USD{ClusterName}. (User: USD{UserName}) 1102 NETWORK_UPDATE_NETWORK_TO_VDS_INTERFACE Info Update Network USD{NetworkName} in Host USD{VdsName}. (User: USD{UserName}) 1103 NETWORK_UPDATE_NETWORK_TO_VDS_INTERFACE_FAILED Error Failed to update Network USD{NetworkName} in Host USD{VdsName}. (User: USD{UserName}) 1104 NETWORK_COMMINT_NETWORK_CHANGES Info Network changes were saved on host USD{VdsName} 1105 NETWORK_COMMINT_NETWORK_CHANGES_FAILED Error Failed to commit network changes on USD{VdsName} 1106 NETWORK_HOST_USING_WRONG_CLUSER_VLAN Warning USD{VdsName} is having wrong vlan id: USD{VlanIdHost}, expected vlan id: USD{VlanIdCluster} 1107 NETWORK_HOST_MISSING_CLUSER_VLAN Warning USD{VdsName} is missing vlan id: USD{VlanIdCluster} that is expected by the cluster 1108 VDS_NETWORK_MTU_DIFFER_FROM_LOGICAL_NETWORK Info 1109 BRIDGED_NETWORK_OVER_MULTIPLE_INTERFACES Warning Bridged network USD{NetworkName} is attached to multiple interfaces: USD{Interfaces} on Host USD{VdsName}. 1110 VDS_NETWORKS_OUT_OF_SYNC Warning Host USD{VdsName}'s following network(s) are not synchronized with their Logical Network configuration: USD{Networks}. 1111 VM_MIGRATION_FAILED_DURING_MOVE_TO_MAINTENANCE_NO_DESTINATION_VDS Error Migration failedUSD{DueToMigrationError} while Source Host is in 'preparing for maintenance' state.\n Consider manual intervention\: stopping/migrating Vms as Host's state will not\n turn to maintenance while VMs are still running on it.(VM: USD{VmName}, Source: USD{VdsName}). 1112 NETWORK_UPDTAE_NETWORK_ON_CLUSTER Info Network USD{NetworkName} on Cluster USD{ClusterName} updated. 1113 NETWORK_UPDTAE_NETWORK_ON_CLUSTER_FAILED Error Failed to update Network USD{NetworkName} on Cluster USD{ClusterName}. 1114 NETWORK_UPDATE_NETWORK Info Network USD{NetworkName} was updated on Data Center: USD{StoragePoolName} 1115 NETWORK_UPDATE_NETWORK_FAILED Error Failed to update Network USD{NetworkName} on Data Center: USD{StoragePoolName} 1116 NETWORK_UPDATE_VM_INTERFACE_LINK_UP Info Link State is UP. 1117 NETWORK_UPDATE_VM_INTERFACE_LINK_DOWN Info Link State is DOWN. 1118 INVALID_BOND_INTERFACE_FOR_MANAGEMENT_NETWORK_CONFIGURATION Error Failed to configure management network on host USD{VdsName}. Host USD{VdsName} has an invalid bond interface (USD{InterfaceName} contains less than 2 active slaves) for the management network configuration. 1119 VLAN_ID_MISMATCH_FOR_MANAGEMENT_NETWORK_CONFIGURATION Error Failed to configure management network on host USD{VdsName}. Host USD{VdsName} has an interface USD{InterfaceName} for the management network configuration with VLAN-ID (USD{VlanId}), which is different from data-center definition (USD{MgmtVlanId}). 1120 SETUP_NETWORK_FAILED_FOR_MANAGEMENT_NETWORK_CONFIGURATION Error Failed to configure management network on host USD{VdsName} due to setup networks failure. 1121 PERSIST_NETWORK_FAILED_FOR_MANAGEMENT_NETWORK Warning Failed to configure management network on host USD{VdsName} due to failure in persisting the management network configuration. 1122 ADD_VNIC_PROFILE Info VM network interface profile USD{VnicProfileName} was added to network USD{NetworkName} in Data Center: USD{DataCenterName}. 
(User: USD{UserName}) 1123 ADD_VNIC_PROFILE_FAILED Error Failed to add VM network interface profile USD{VnicProfileName} to network USD{NetworkName} in Data Center: USD{DataCenterName} (User: USD{UserName}) 1124 UPDATE_VNIC_PROFILE Info VM network interface profile USD{VnicProfileName} was updated for network USD{NetworkName} in Data Center: USD{DataCenterName}. (User: USD{UserName}) 1125 UPDATE_VNIC_PROFILE_FAILED Error Failed to update VM network interface profile USD{VnicProfileName} for network USD{NetworkName} in Data Center: USD{DataCenterName}. (User: USD{UserName}) 1126 REMOVE_VNIC_PROFILE Info VM network interface profile USD{VnicProfileName} was removed from network USD{NetworkName} in Data Center: USD{DataCenterName}. (User: USD{UserName}) 1127 REMOVE_VNIC_PROFILE_FAILED Error Failed to remove VM network interface profile USD{VnicProfileName} from network USD{NetworkName} in Data Center: USD{DataCenterName}. (User: USD{UserName}) 1128 NETWORK_WITHOUT_INTERFACES Warning Network USD{NetworkName} is not attached to any interface on host USD{VdsName}. 1129 VNIC_PROFILE_UNSUPPORTED_FEATURES Warning VM USD{VmName} has network interface USD{NicName} which is using profile USD{VnicProfile} with unsupported feature(s) 'USD{UnsupportedFeatures}' by VM cluster USD{ClusterName} (version USD{CompatibilityVersion}). 1131 REMOVE_NETWORK_BY_LABEL_FAILED Error Network USD{Network} cannot be removed from the following hosts: USD{HostNames} in data-center USD{StoragePoolName}. 1132 LABEL_NETWORK Info Network USD{NetworkName} was labeled USD{Label} in data-center USD{StoragePoolName}. 1133 LABEL_NETWORK_FAILED Error Failed to label network USD{NetworkName} with label USD{Label} in data-center USD{StoragePoolName}. 1134 UNLABEL_NETWORK Info Network USD{NetworkName} was unlabeled in data-center USD{StoragePoolName}. 1135 UNLABEL_NETWORK_FAILED Error Failed to unlabel network USD{NetworkName} in data-center USD{StoragePoolName}. 1136 LABEL_NIC Info Network interface card USD{NicName} was labeled USD{Label} on host USD{VdsName}. 1137 LABEL_NIC_FAILED Error Failed to label network interface card USD{NicName} with label USD{Label} on host USD{VdsName}. 1138 UNLABEL_NIC Info Label USD{Label} was removed from network interface card USD{NicName} on host USD{VdsName}. 1139 UNLABEL_NIC_FAILED Error Failed to remove label USD{Label} from network interface card USD{NicName} on host USD{VdsName}. 1140 SUBNET_REMOVED Info Subnet USD{SubnetName} was removed from provider USD{ProviderName}. (User: USD{UserName}) 1141 SUBNET_REMOVAL_FAILED Error Failed to remove subnet USD{SubnetName} from provider USD{ProviderName}. (User: USD{UserName}) 1142 SUBNET_ADDED Info Subnet USD{SubnetName} was added on provider USD{ProviderName}. (User: USD{UserName}) 1143 SUBNET_ADDITION_FAILED Error Failed to add subnet USD{SubnetName} on provider USD{ProviderName}. (User: USD{UserName}) 1144 CONFIGURE_NETWORK_BY_LABELS_WHEN_CHANGING_CLUSTER_FAILED Error Failed to configure networks on host USD{VdsName} while changing its cluster. 1145 PERSIST_NETWORK_ON_HOST Info (USD{Sequence}/USD{Total}): Applying changes for network(s) USD{NetworkNames} on host USD{VdsName}. (User: USD{UserName}) 1146 PERSIST_NETWORK_ON_HOST_FINISHED Info (USD{Sequence}/USD{Total}): Successfully applied changes for network(s) USD{NetworkNames} on host USD{VdsName}. (User: USD{UserName}) 1147 PERSIST_NETWORK_ON_HOST_FAILED Error (USD{Sequence}/USD{Total}): Failed to apply changes for network(s) USD{NetworkNames} on host USD{VdsName}. 
(User: USD{UserName}) 1148 MULTI_UPDATE_NETWORK_NOT_POSSIBLE Warning Cannot apply network USD{NetworkName} changes to hosts on unsupported data center USD{StoragePoolName}. (User: USD{UserName}) 1149 REMOVE_PORT_FROM_EXTERNAL_PROVIDER_FAILED Warning Failed to remove vNIC USD{NicName} from external network provider USD{ProviderName}. The vNIC can be identified on the provider by device id USD{NicId}. 1150 IMPORTEXPORT_EXPORT_VM Info Vm USD{VmName} was exported successfully to USD{StorageDomainName} 1151 IMPORTEXPORT_EXPORT_VM_FAILED Error Failed to export Vm USD{VmName} to USD{StorageDomainName} 1152 IMPORTEXPORT_IMPORT_VM Info Vm USD{VmName} was imported successfully to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1153 IMPORTEXPORT_IMPORT_VM_FAILED Error Failed to import Vm USD{VmName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1154 IMPORTEXPORT_REMOVE_TEMPLATE Info Template USD{VmTemplateName} was removed from USD{StorageDomainName} 1155 IMPORTEXPORT_REMOVE_TEMPLATE_FAILED Error Failed to remove Template USD{VmTemplateName} from USD{StorageDomainName} 1156 IMPORTEXPORT_EXPORT_TEMPLATE Info Template USD{VmTemplateName} was exported successfully to USD{StorageDomainName} 1157 IMPORTEXPORT_EXPORT_TEMPLATE_FAILED Error Failed to export Template USD{VmTemplateName} to USD{StorageDomainName} 1158 IMPORTEXPORT_IMPORT_TEMPLATE Info Template USD{VmTemplateName} was imported successfully to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1159 IMPORTEXPORT_IMPORT_TEMPLATE_FAILED Error Failed to import Template USD{VmTemplateName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1160 IMPORTEXPORT_REMOVE_VM Info Vm USD{VmName} was removed from USD{StorageDomainName} 1161 IMPORTEXPORT_REMOVE_VM_FAILED Error Failed to remove Vm USD{VmName} from USD{StorageDomainName} 1162 IMPORTEXPORT_STARTING_EXPORT_VM Info Starting to export Vm USD{VmName} to USD{StorageDomainName} 1163 IMPORTEXPORT_STARTING_IMPORT_TEMPLATE Info Starting to import Template USD{VmTemplateName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1164 IMPORTEXPORT_STARTING_EXPORT_TEMPLATE Info Starting to export Template USD{VmTemplateName} to USD{StorageDomainName} 1165 IMPORTEXPORT_STARTING_IMPORT_VM Info Starting to import Vm USD{VmName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1166 IMPORTEXPORT_STARTING_REMOVE_TEMPLATE Info Starting to remove Template USD{VmTemplateName} from USD{StorageDomainName} 1167 IMPORTEXPORT_STARTING_REMOVE_VM Info Starting to remove Vm USD{VmName} from USD{StorageDomainName} 1168 IMPORTEXPORT_FAILED_TO_IMPORT_VM Warning Failed to read VM 'USD{ImportedVmName}' OVF, it may be corrupted. Underlying error message: USD{ErrorMessage} 1169 IMPORTEXPORT_FAILED_TO_IMPORT_TEMPLATE Warning Failed to read Template 'USD{Template}' OVF, it may be corrupted. Underlying error message: USD{ErrorMessage} 1170 IMPORTEXPORT_IMPORT_TEMPLATE_INVALID_INTERFACES Normal While importing Template USD{EntityName}, the Network/s USD{Networks} were found to be Non-VM Networks or do not exist in Cluster. Network Name was not set in the Interface/s USD{Interfaces}. 1171 USER_ACCOUNT_PASSWORD_EXPIRED Error User USD{UserName} cannot login, as the user account password has expired. Please contact the system administrator. 1172 AUTH_FAILED_INVALID_CREDENTIALS Error User USD{UserName} cannot login, please verify the username and password. 
1173 AUTH_FAILED_CLOCK_SKEW_TOO_GREAT Error User USD{UserName} cannot login, the engine clock is not synchronized with directory services. Please contact the system administrator. 1174 AUTH_FAILED_NO_KDCS_FOUND Error User USD{UserName} cannot login, authentication domain cannot be found. Please contact the system administrator. 1175 AUTH_FAILED_DNS_ERROR Error User USD{UserName} cannot login, there's an error in DNS configuration. Please contact the system administrator. 1176 AUTH_FAILED_OTHER Error User USD{UserName} cannot login, unknown Kerberos error. Please contact the system administrator. 1177 AUTH_FAILED_DNS_COMMUNICATION_ERROR Error User USD{UserName} cannot login, cannot look up DNS for SRV records. Please contact the system administrator. 1178 AUTH_FAILED_CONNECTION_TIMED_OUT Error User USD{UserName} cannot login, connection to LDAP server has timed out. Please contact the system administrator. 1179 AUTH_FAILED_WRONG_REALM Error User USD{UserName} cannot login, please verify your domain name. 1180 AUTH_FAILED_CONNECTION_ERROR Error User USD{UserName} cannot login, connection refused or some configuration problems exist. Possible DNS error. Please contact the system administrator. 1181 AUTH_FAILED_CANNOT_FIND_LDAP_SERVER_FOR_DOMAIN Error User USD{UserName} cannot login, cannot find valid LDAP server for domain. Please contact the system administrator. 1182 AUTH_FAILED_NO_USER_INFORMATION_WAS_FOUND Error User USD{UserName} cannot login, no user information was found. Please contact the system administrator. 1183 AUTH_FAILED_CLIENT_NOT_FOUND_IN_KERBEROS_DATABASE Error User USD{UserName} cannot login, user was not found in domain. Please contact the system administrator. 1184 AUTH_FAILED_INTERNAL_KERBEROS_ERROR Error User USD{UserName} cannot login, an internal error has occurred in the Kerberos implementation of the JVM. Please contact the system administrator. 1185 USER_ACCOUNT_EXPIRED Error The account for USD{UserName} has expired. Please contact the system administrator. 1186 IMPORTEXPORT_NO_PROXY_HOST_AVAILABLE_IN_DC Error No Host in Data Center 'USD{StoragePoolName}' can serve as a proxy to retrieve remote VMs information (User: USD{UserName}). 1187 IMPORTEXPORT_HOST_CANNOT_SERVE_AS_PROXY Error Host USD{VdsName} cannot be used as a proxy to retrieve remote VMs information since it is not up (User: USD{UserName}). 1188 IMPORTEXPORT_PARTIAL_VM_MISSING_ENTITIES Warning The following entities could not be verified and will not be part of the imported VM USD{VmName}: 'USD{MissingEntities}' (User: USD{UserName}). 1189 IMPORTEXPORT_IMPORT_VM_FAILED_UPDATING_OVF Error Failed to import Vm USD{VmName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName}, could not update VM data in export. 1190 USER_RESTORE_FROM_SNAPSHOT_START Info Restoring VM USD{VmName} from snapshot started by user USD{UserName}. 1191 VM_DISK_ALREADY_CHANGED Info CD USD{DiskName} is already inserted to VM USD{VmName}, disk change action was skipped. User: USD{UserName}. 1192 VM_DISK_ALREADY_EJECTED Info CD is already ejected from VM USD{VmName}, disk change action was skipped. User: USD{UserName}. 
1193 IMPORTEXPORT_STARTING_CONVERT_VM Info Starting to convert Vm USD{VmName} 1194 IMPORTEXPORT_CONVERT_FAILED Info Failed to convert Vm USD{VmName} 1195 IMPORTEXPORT_CANNOT_GET_OVF Info Failed to get the configuration of converted Vm USD{VmName} 1196 IMPORTEXPORT_INVALID_OVF Info Failed to process the configuration of converted Vm USD{VmName} 1197 IMPORTEXPORT_PARTIAL_TEMPLATE_MISSING_ENTITIES Warning The following entities could not be verified and will not be part of the imported Template USD{VmTemplateName}: 'USD{MissingEntities}' (User: USD{UserName}). 1200 ENTITY_RENAMED Info USD{EntityType} USD{OldEntityName} was renamed from USD{OldEntityName} to USD{NewEntityName} by USD{UserName}. 1201 UPDATE_HOST_NIC_VFS_CONFIG Info The VFs configuration of network interface card USD{NicName} on host USD{VdsName} was updated. 1202 UPDATE_HOST_NIC_VFS_CONFIG_FAILED Error Failed to update the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1203 ADD_VFS_CONFIG_NETWORK Info Network USD{NetworkName} was added to the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1204 ADD_VFS_CONFIG_NETWORK_FAILED Info Failed to add USD{NetworkName} to the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1205 REMOVE_VFS_CONFIG_NETWORK Info Network USD{NetworkName} was removed from the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1206 REMOVE_VFS_CONFIG_NETWORK_FAILED Info Failed to remove USD{NetworkName} from the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1207 ADD_VFS_CONFIG_LABEL Info Label USD{Label} was added to the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1208 ADD_VFS_CONFIG_LABEL_FAILED Info Failed to add USD{Label} to the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1209 REMOVE_VFS_CONFIG_LABEL Info Label USD{Label} was removed from the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1210 REMOVE_VFS_CONFIG_LABEL_FAILED Info Failed to remove USD{Label} from the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1211 USER_REDUCE_DOMAIN_DEVICES_STARTED Info Started to reduce Storage USD{StorageDomainName} devices. (User: USD{UserName}). 1212 USER_REDUCE_DOMAIN_DEVICES_FAILED_METADATA_DEVICES Error Failed to reduce Storage USD{StorageDomainName}. The following devices contains the domain metadata USD{deviceIds} and can't be reduced from the domain. (User: USD{UserName}). 1213 USER_REDUCE_DOMAIN_DEVICES_FAILED Error Failed to reduce Storage USD{StorageDomainName}. (User: USD{UserName}). 1214 USER_REDUCE_DOMAIN_DEVICES_SUCCEEDED Info Storage USD{StorageDomainName} has been reduced. (User: USD{UserName}). 1215 USER_REDUCE_DOMAIN_DEVICES_FAILED_NO_FREE_SPACE Error Can't reduce Storage USD{StorageDomainName}. There is not enough space on the destination devices of the storage domain. (User: USD{UserName}). 1216 USER_REDUCE_DOMAIN_DEVICES_FAILED_TO_GET_DOMAIN_INFO Error Can't reduce Storage USD{StorageDomainName}. Failed to get the domain info. (User: USD{UserName}). 1217 CANNOT_IMPORT_VM_WITH_LEASE_COMPAT_VERSION Warning The VM USD{VmName} has a VM lease defined yet will be imported without it as the VM compatibility version does not support VM leases. 
1218 CANNOT_IMPORT_VM_WITH_LEASE_STORAGE_DOMAIN Warning The VM USD{VmName} has a VM lease defined yet will be imported without it as the Storage Domain for the lease does not exist or is not active. 1219 FAILED_DETERMINE_STORAGE_DOMAIN_METADATA_DEVICES Error Failed to determine the metadata devices of Storage Domain USD{StorageDomainName}. 1220 HOT_PLUG_LEASE_FAILED Error Failed to hot plug lease to the VM USD{VmName}. The VM is running without a VM lease. 1221 HOT_UNPLUG_LEASE_FAILED Error Failed to hot unplug lease to the VM USD{VmName}. 1222 DETACH_DOMAIN_WITH_VMS_AND_TEMPLATES_LEASES Warning The deactivated domain USD{storageDomainName} contained leases for the following VMs/Templates: USD{entitiesNames}, a part of those VMs will not run and need manual removal of the VM leases. 1223 IMPORTEXPORT_STARTING_EXPORT_VM_TO_OVA Info Starting to export Vm USD{VmName} as a Virtual Appliance 1224 IMPORTEXPORT_EXPORT_VM_TO_OVA Info Vm USD{VmName} was exported successfully as a Virtual Appliance to path USD{OvaPath} on Host USD{VdsName} 1225 IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED Error Failed to export Vm USD{VmName} as a Virtual Appliance to path USD{OvaPath} on Host USD{VdsName} 1226 IMPORTEXPORT_STARTING_EXPORT_TEMPLATE_TO_OVA Info Starting to export Template USD{VmTemplateName} as a Virtual Appliance 1227 IMPORTEXPORT_EXPORT_TEMPLATE_TO_OVA Info Template USD{VmTemplateName} was exported successfully as a Virtual Appliance to path USD{OvaPath} on Host USD{VdsName} 1228 IMPORTEXPORT_EXPORT_TEMPLATE_TO_OVA_FAILED Error Failed to export Template USD{VmTemplateName} as a Virtual Appliance to path USD{OvaPath} on Host USD{VdsName} 1300 NUMA_ADD_VM_NUMA_NODE_SUCCESS Info Add VM NUMA node successfully. 1301 NUMA_ADD_VM_NUMA_NODE_FAILED Error Add VM NUMA node failed. 1310 NUMA_UPDATE_VM_NUMA_NODE_SUCCESS Info Update VM NUMA node successfully. 1311 NUMA_UPDATE_VM_NUMA_NODE_FAILED Error Update VM NUMA node failed. 1320 NUMA_REMOVE_VM_NUMA_NODE_SUCCESS Info Remove VM NUMA node successfully. 1321 NUMA_REMOVE_VM_NUMA_NODE_FAILED Error Remove VM NUMA node failed. 1322 USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE Error Failed to create Template USD{VmTemplateName} or its disks from VM USD{VmName}. 1323 USER_ADD_VM_TEMPLATE_ASSIGN_ILLEGAL_FAILURE Error Failed preparing Template USD{VmTemplateName} for sealing (VM: USD{VmName}). 1324 USER_ADD_VM_TEMPLATE_SEAL_FAILURE Error Failed to seal Template USD{VmTemplateName} (VM: USD{VmName}). 1325 USER_SPARSIFY_IMAGE_START Info Started to sparsify USD{DiskAlias} 1326 USER_SPARSIFY_IMAGE_FINISH_SUCCESS Info USD{DiskAlias} sparsified successfully. 1327 USER_SPARSIFY_IMAGE_FINISH_FAILURE Error Failed to sparsify USD{DiskAlias}. 1328 USER_AMEND_IMAGE_START Info Started to amend USD{DiskAlias} 1329 USER_AMEND_IMAGE_FINISH_SUCCESS Info USD{DiskAlias} has been amended successfully. 1330 USER_AMEND_IMAGE_FINISH_FAILURE Error Failed to amend USD{DiskAlias}. 1340 VM_DOES_NOT_FIT_TO_SINGLE_NUMA_NODE Warning VM USD{VmName} does not fit to a single NUMA node on host USD{HostName}. This may negatively impact its performance. Consider using vNUMA and NUMA pinning for this VM. 1400 ENTITY_RENAMED_INTERNALLY Info USD{EntityType} USD{OldEntityName} was renamed from USD{OldEntityName} to USD{NewEntityName}. 1402 USER_LOGIN_ON_BEHALF_FAILED Error Failed to execute login on behalf - USD{LoginOnBehalfLogInfo}. 1403 IRS_CONFIRMED_DISK_SPACE_LOW Warning Warning, low confirmed disk space. USD{StorageDomainName} domain has USD{DiskSpace} GB of confirmed free space. 
2000 USER_HOTPLUG_DISK Info VM USD{VmName} disk USD{DiskAlias} was plugged by USD{UserName}. 2001 USER_FAILED_HOTPLUG_DISK Error Failed to plug disk USD{DiskAlias} to VM USD{VmName} (User: USD{UserName}). 2002 USER_HOTUNPLUG_DISK Info VM USD{VmName} disk USD{DiskAlias} was unplugged by USD{UserName}. 2003 USER_FAILED_HOTUNPLUG_DISK Error Failed to unplug disk USD{DiskAlias} from VM USD{VmName} (User: USD{UserName}). 2004 USER_COPIED_DISK Info User USD{UserName} is copying disk USD{DiskAlias} to domain USD{StorageDomainName}. 2005 USER_FAILED_COPY_DISK Error User USD{UserName} failed to copy disk USD{DiskAlias} to domain USD{StorageDomainName}. 2006 USER_COPIED_DISK_FINISHED_SUCCESS Info User USD{UserName} finished copying disk USD{DiskAlias} to domain USD{StorageDomainName}. 2007 USER_COPIED_DISK_FINISHED_FAILURE Error User USD{UserName} finished with error copying disk USD{DiskAlias} to domain USD{StorageDomainName}. 2008 USER_MOVED_DISK Info User USD{UserName} is moving disk USD{DiskAlias} to domain USD{StorageDomainName}. 2009 USER_FAILED_MOVED_VM_DISK Error User USD{UserName} failed to move disk USD{DiskAlias} to domain USD{StorageDomainName}. 2010 USER_MOVED_DISK_FINISHED_SUCCESS Info User USD{UserName} finished moving disk USD{DiskAlias} to domain USD{StorageDomainName}. 2011 USER_MOVED_DISK_FINISHED_FAILURE Error User USD{UserName} has failed to move disk USD{DiskAlias} to domain USD{StorageDomainName}. 2012 USER_FINISHED_REMOVE_DISK_NO_DOMAIN Info Disk USD{DiskAlias} was successfully removed (User USD{UserName}). 2013 USER_FINISHED_FAILED_REMOVE_DISK_NO_DOMAIN Warning Failed to remove disk USD{DiskAlias} (User USD{UserName}). 2014 USER_FINISHED_REMOVE_DISK Info Disk USD{DiskAlias} was successfully removed from domain USD{StorageDomainName} (User USD{UserName}). 2015 USER_FINISHED_FAILED_REMOVE_DISK Warning Failed to remove disk USD{DiskAlias} from storage domain USD{StorageDomainName} (User: USD{UserName}). 2016 USER_ATTACH_DISK_TO_VM Info Disk USD{DiskAlias} was successfully attached to VM USD{VmName} by USD{UserName}. 2017 USER_FAILED_ATTACH_DISK_TO_VM Error Failed to attach Disk USD{DiskAlias} to VM USD{VmName} (User: USD{UserName}). 2018 USER_DETACH_DISK_FROM_VM Info Disk USD{DiskAlias} was successfully detached from VM USD{VmName} by USD{UserName}. 2019 USER_FAILED_DETACH_DISK_FROM_VM Error Failed to detach Disk USD{DiskAlias} from VM USD{VmName} (User: USD{UserName}). 2020 USER_ADD_DISK Info Add-Disk operation of 'USD{DiskAlias}' was initiated by USD{UserName}. 2021 USER_ADD_DISK_FINISHED_SUCCESS Info The disk 'USD{DiskAlias}' was successfully added. 2022 USER_ADD_DISK_FINISHED_FAILURE Error Add-Disk operation failed to complete. 2023 USER_FAILED_ADD_DISK Error Add-Disk operation failed (User: USD{UserName}). 2024 USER_RUN_UNLOCK_ENTITY_SCRIPT Info 2025 USER_MOVE_IMAGE_GROUP_FAILED_TO_DELETE_SRC_IMAGE Warning Possible failure while deleting USD{DiskAlias} from the source Storage Domain USD{StorageDomainName} during the move operation. The Storage Domain may be manually cleaned-up from possible leftovers (User:USD{UserName}). 2026 USER_MOVE_IMAGE_GROUP_FAILED_TO_DELETE_DST_IMAGE Warning Possible failure while clearing possible leftovers of USD{DiskAlias} from the target Storage Domain USD{StorageDomainName} after the move operation failed to copy the image to it properly. The Storage Domain may be manually cleaned-up from possible leftovers (User:USD{UserName}). 2027 USER_IMPORT_IMAGE Info User USD{UserName} is importing image USD{RepoImageName} to domain USD{StorageDomainName}. 
2028 USER_IMPORT_IMAGE_FINISHED_SUCCESS Info User ${UserName} successfully imported image ${RepoImageName} to domain ${StorageDomainName}. 2029 USER_IMPORT_IMAGE_FINISHED_FAILURE Error User ${UserName} failed to import image ${RepoImageName} to domain ${StorageDomainName}. 2030 USER_EXPORT_IMAGE Info User ${UserName} exporting image ${RepoImageName} to domain ${DestinationStorageDomainName}. 2031 USER_EXPORT_IMAGE_FINISHED_SUCCESS Info User ${UserName} successfully exported image ${RepoImageName} to domain ${DestinationStorageDomainName}. 2032 USER_EXPORT_IMAGE_FINISHED_FAILURE Error User ${UserName} failed to export image ${RepoImageName} to domain ${DestinationStorageDomainName}. 2033 HOT_SET_NUMBER_OF_CPUS Info Hotplug CPU: changed the number of CPUs on VM ${vmName} from ${previousNumberOfCpus} to ${numberOfCpus} 2034 FAILED_HOT_SET_NUMBER_OF_CPUS Error Failed to hot set number of CPUS to VM ${vmName}. Underlying error message: ${ErrorMessage} 2035 USER_ISCSI_BOND_HOST_RESTART_WARNING Warning The following networks have been removed from the iSCSI bond ${IscsiBondName}: ${NetworkNames}. For these changes to take effect, the hosts must be moved to maintenance and activated again. 2036 ADD_DISK_INTERNAL Info Add-Disk operation of '${DiskAlias}' was initiated by the system. 2037 ADD_DISK_INTERNAL_FAILURE Info Add-Disk operation of '${DiskAlias}' failed to complete. 2038 USER_REMOVE_DISK_INITIATED Info Removal of Disk ${DiskAlias} from domain ${StorageDomainName} was initiated by ${UserName}. 2039 HOT_SET_MEMORY Info Hotset memory: changed the amount of memory on VM ${vmName} from ${previousMem} to ${newMem} 2040 FAILED_HOT_SET_MEMORY Error Failed to hot set memory to VM ${vmName}. Underlying error message: ${ErrorMessage} 2041 DISK_PREALLOCATION_FAILED Error 2042 USER_FINISHED_REMOVE_DISK_ATTACHED_TO_VMS Info Disk ${DiskAlias} associated to the VMs ${VmNames} was successfully removed from domain ${StorageDomainName} (User ${UserName}). 2043 USER_FINISHED_REMOVE_DISK_ATTACHED_TO_VMS_NO_DOMAIN Info Disk ${DiskAlias} associated to the VMs ${VmNames} was successfully removed (User ${UserName}). 2044 USER_REMOVE_DISK_ATTACHED_TO_VMS_INITIATED Info Removal of Disk ${DiskAlias} associated to the VMs ${VmNames} from domain ${StorageDomainName} was initiated by ${UserName}. 2045 USER_COPY_IMAGE_GROUP_FAILED_TO_DELETE_DST_IMAGE Warning Possible failure while clearing possible leftovers of ${DiskAlias} from the target Storage Domain ${StorageDomainName} after the operation failed. The Storage Domain may be manually cleaned-up from possible leftovers (User:${UserName}). 2046 MEMORY_HOT_UNPLUG_SUCCESSFULLY_REQUESTED Info Hot unplug of memory device (${deviceId}) of size ${memoryDeviceSizeMb}MB was successfully requested on VM '${vmName}'. Physical memory guaranteed updated from ${oldMinMemoryMb}MB to ${newMinMemoryMb}MB. 2047 MEMORY_HOT_UNPLUG_FAILED Error Failed to hot unplug memory device (${deviceId}) of size ${memoryDeviceSizeMb}MiB out of VM '${vmName}': ${errorMessage} 2048 FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE Error Failed to hot plug memory to VM ${vmName}. Amount of added memory (${memoryAdded}MiB) is not dividable by ${requiredFactor}MiB. 2049 MEMORY_HOT_UNPLUG_SUCCESSFULLY_REQUESTED_PLUS_MEMORY_INFO Info Hot unplug of memory device (${deviceId}) of size ${memoryDeviceSizeMb}MiB was successfully requested on VM '${vmName}'. 
Defined Memory updated from ${oldMemoryMb}MiB to ${newMemoryMb}MiB. Physical memory guaranteed updated from ${oldMinMemoryMb}MiB to ${newMinMemoryMb}MiB. 2050 NO_MEMORY_DEVICE_TO_HOT_UNPLUG Info Defined memory can't be decreased. There are no hot plugged memory devices on VM ${vmName}. 2051 NO_SUITABLE_MEMORY_DEVICE_TO_HOT_UNPLUG Info There is no memory device to hot unplug to satisfy request to decrement memory from ${oldMemoryMb}MiB to ${newMemoryMB}MiB on VM ${vmName}. Available memory devices (decremented memory sizes): ${memoryHotUnplugOptions}. 3000 USER_ADD_QUOTA Info Quota ${QuotaName} has been added by ${UserName}. 3001 USER_FAILED_ADD_QUOTA Error Failed to add Quota ${QuotaName}. The operation was initiated by ${UserName}. 3002 USER_UPDATE_QUOTA Info Quota ${QuotaName} has been updated by ${UserName}. 3003 USER_FAILED_UPDATE_QUOTA Error Failed to update Quota ${QuotaName}. The operation was initiated by ${UserName}. 3004 USER_DELETE_QUOTA Info Quota ${QuotaName} has been deleted by ${UserName}. 3005 USER_FAILED_DELETE_QUOTA Error Failed to delete Quota ${QuotaName}. The operation was initiated by ${UserName}. 3006 USER_EXCEEDED_QUOTA_CLUSTER_GRACE_LIMIT Error Cluster-Quota ${QuotaName} limit exceeded and operation was blocked. Utilization: ${Utilization}, Requested: ${Requested} - Please select a different quota or contact your administrator to extend the quota. 3007 USER_EXCEEDED_QUOTA_CLUSTER_LIMIT Warning Cluster-Quota ${QuotaName} limit exceeded and entered the grace zone. Utilization: ${Utilization} (It is advised to select a different quota or contact your administrator to extend the quota). 3008 USER_EXCEEDED_QUOTA_CLUSTER_THRESHOLD Warning Cluster-Quota ${QuotaName} is about to exceed. Utilization: ${Utilization} 3009 USER_EXCEEDED_QUOTA_STORAGE_GRACE_LIMIT Error Storage-Quota ${QuotaName} limit exceeded and operation was blocked. Utilization(used/requested): ${CurrentStorage}%/${Requested}% - Please select a different quota or contact your administrator to extend the quota. 3010 USER_EXCEEDED_QUOTA_STORAGE_LIMIT Warning Storage-Quota ${QuotaName} limit exceeded and entered the grace zone. Utilization: ${CurrentStorage}% (It is advised to select a different quota or contact your administrator to extend the quota). 3011 USER_EXCEEDED_QUOTA_STORAGE_THRESHOLD Warning Storage-Quota ${QuotaName} is about to exceed. Utilization: ${CurrentStorage}% 3012 QUOTA_STORAGE_RESIZE_LOWER_THEN_CONSUMPTION Warning Storage-Quota ${QuotaName}: the new size set for this quota is less than current disk utilization. 3013 MISSING_QUOTA_STORAGE_PARAMETERS_PERMISSIVE_MODE Warning Missing Quota for Disk, proceeding since in Permissive (Audit) mode. 3014 MISSING_QUOTA_CLUSTER_PARAMETERS_PERMISSIVE_MODE Warning Missing Quota for VM ${VmName}, proceeding since in Permissive (Audit) mode. 3015 USER_EXCEEDED_QUOTA_CLUSTER_GRACE_LIMIT_PERMISSIVE_MODE Warning Cluster-Quota ${QuotaName} limit exceeded, proceeding since in Permissive (Audit) mode. Utilization: ${Utilization}, Requested: ${Requested} - Please select a different quota or contact your administrator to extend the quota. 3016 USER_EXCEEDED_QUOTA_STORAGE_GRACE_LIMIT_PERMISSIVE_MODE Warning Storage-Quota ${QuotaName} limit exceeded, proceeding since in Permissive (Audit) mode. Utilization(used/requested): ${CurrentStorage}%/${Requested}% - Please select a different quota or contact your administrator to extend the quota. 
3017 USER_IMPORT_IMAGE_AS_TEMPLATE Info User USD{UserName} importing image USD{RepoImageName} as template USD{TemplateName} to domain USD{StorageDomainName}. 3018 USER_IMPORT_IMAGE_AS_TEMPLATE_FINISHED_SUCCESS Info User USD{UserName} successfully imported image USD{RepoImageName} as template USD{TemplateName} to domain USD{StorageDomainName}. 3019 USER_IMPORT_IMAGE_AS_TEMPLATE_FINISHED_FAILURE Error User USD{UserName} failed to import image USD{RepoImageName} as template USD{TemplateName} to domain USD{StorageDomainName}. 4000 GLUSTER_VOLUME_CREATE Info Gluster Volume USD{glusterVolumeName} created on cluster USD{clusterName}. 4001 GLUSTER_VOLUME_CREATE_FAILED Error Creation of Gluster Volume USD{glusterVolumeName} failed on cluster USD{clusterName}. 4002 GLUSTER_VOLUME_OPTION_ADDED Info Volume Option USD{Key} 4003 GLUSTER_VOLUME_OPTION_SET_FAILED Error Volume Option USD{Key} 4004 GLUSTER_VOLUME_START Info Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName} started. 4005 GLUSTER_VOLUME_START_FAILED Error Could not start Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName}. 4006 GLUSTER_VOLUME_STOP Info Gluster Volume USD{glusterVolumeName} stopped on cluster USD{clusterName}. 4007 GLUSTER_VOLUME_STOP_FAILED Error Could not stop Gluster Volume USD{glusterVolumeName} on cluster USD{clusterName}. 4008 GLUSTER_VOLUME_OPTIONS_RESET Info Volume Option USD{Key} 4009 GLUSTER_VOLUME_OPTIONS_RESET_FAILED Error Could not reset Gluster Volume USD{glusterVolumeName} Options on cluster USD{clusterName}. 4010 GLUSTER_VOLUME_DELETE Info Gluster Volume USD{glusterVolumeName} deleted on cluster USD{clusterName}. 4011 GLUSTER_VOLUME_DELETE_FAILED Error Could not delete Gluster Volume USD{glusterVolumeName} on cluster USD{clusterName}. 4012 GLUSTER_VOLUME_REBALANCE_START Info Gluster Volume USD{glusterVolumeName} rebalance started on cluster USD{clusterName}. 4013 GLUSTER_VOLUME_REBALANCE_START_FAILED Error Could not start Gluster Volume USD{glusterVolumeName} rebalance on cluster USD{clusterName}. 4014 GLUSTER_VOLUME_REMOVE_BRICKS Info Bricks removed from Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName}. 4015 GLUSTER_VOLUME_REMOVE_BRICKS_FAILED Error Could not remove bricks from Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName}. 4016 GLUSTER_VOLUME_REPLACE_BRICK_FAILED Error Replace Gluster Volume USD{glusterVolumeName} Brick failed on cluster USD{clusterName} 4017 GLUSTER_VOLUME_REPLACE_BRICK_START Info Gluster Volume USD{glusterVolumeName} Replace Brick started on cluster USD{clusterName}. 4018 GLUSTER_VOLUME_REPLACE_BRICK_START_FAILED Error Could not start Gluster Volume USD{glusterVolumeName} Replace Brick on cluster USD{clusterName}. 4019 GLUSTER_VOLUME_ADD_BRICK Info USD{NoOfBricks} brick(s) added to volume USD{glusterVolumeName} of cluster USD{clusterName}. 4020 GLUSTER_VOLUME_ADD_BRICK_FAILED Error Failed to add bricks to the Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName}. 4021 GLUSTER_SERVER_REMOVE_FAILED Error Failed to remove host USD{VdsName} from Cluster USD{ClusterName}. 4022 GLUSTER_VOLUME_PROFILE_START Info Gluster Volume USD{glusterVolumeName} profiling started on cluster USD{clusterName}. 4023 GLUSTER_VOLUME_PROFILE_START_FAILED Error Could not start profiling on gluster volume USD{glusterVolumeName} of cluster USD{clusterName} 4024 GLUSTER_VOLUME_PROFILE_STOP Info Gluster Volume USD{glusterVolumeName} profiling stopped on cluster USD{clusterName}. 
4025 GLUSTER_VOLUME_PROFILE_STOP_FAILED Error Could not stop Profiling on gluster volume ${glusterVolumeName} of cluster ${clusterName}. 4026 GLUSTER_VOLUME_CREATED_FROM_CLI Warning Detected new volume ${glusterVolumeName} on cluster ${ClusterName}, and added it to engine DB. 4027 GLUSTER_VOLUME_DELETED_FROM_CLI Info Detected deletion of volume ${glusterVolumeName} on cluster ${ClusterName}, and deleted it from engine DB. 4028 GLUSTER_VOLUME_OPTION_SET_FROM_CLI Warning Detected new option ${key} 4029 GLUSTER_VOLUME_OPTION_RESET_FROM_CLI Warning Detected option ${key} 4030 GLUSTER_VOLUME_PROPERTIES_CHANGED_FROM_CLI Warning Detected changes in properties of volume ${glusterVolumeName} of cluster ${ClusterName}, and updated the same in engine DB. 4031 GLUSTER_VOLUME_BRICK_ADDED_FROM_CLI Warning Detected new brick ${brick} on volume ${glusterVolumeName} of cluster ${ClusterName}, and added it to engine DB. 4032 GLUSTER_VOLUME_BRICK_REMOVED_FROM_CLI Info Detected brick ${brick} removed from Volume ${glusterVolumeName} of cluster ${ClusterName}, and removed it from engine DB. 4033 GLUSTER_SERVER_REMOVED_FROM_CLI Info Detected server ${VdsName} removed from Cluster ${ClusterName}, and removed it from engine DB. 4034 GLUSTER_VOLUME_INFO_FAILED Error Failed to fetch gluster volume list from server ${VdsName}. 4035 GLUSTER_COMMAND_FAILED Error Gluster command [${Command}] failed on server ${Server}. 4038 GLUSTER_SERVER_REMOVE Info Host ${VdsName} removed from Cluster ${ClusterName}. 4039 GLUSTER_VOLUME_STARTED_FROM_CLI Warning Detected that Volume ${glusterVolumeName} of Cluster ${ClusterName} was started, and updated engine DB with its new status. 4040 GLUSTER_VOLUME_STOPPED_FROM_CLI Warning Detected that Volume ${glusterVolumeName} of Cluster ${ClusterName} was stopped, and updated engine DB with its new status. 4041 GLUSTER_VOLUME_OPTION_CHANGED_FROM_CLI Info Detected change in value of option ${key} from ${oldValue} to ${newValue} on volume ${glusterVolumeName} of cluster ${ClusterName}, and updated it to engine DB. 4042 GLUSTER_HOOK_ENABLE Info Gluster Hook ${GlusterHookName} enabled on cluster ${ClusterName}. 4043 GLUSTER_HOOK_ENABLE_FAILED Error Failed to enable Gluster Hook ${GlusterHookName} on cluster ${ClusterName}. ${FailureMessage} 4044 GLUSTER_HOOK_ENABLE_PARTIAL Warning Gluster Hook ${GlusterHookName} enabled on some of the servers on cluster ${ClusterName}. ${FailureMessage} 4045 GLUSTER_HOOK_DISABLE Info Gluster Hook ${GlusterHookName} disabled on cluster ${ClusterName}. 4046 GLUSTER_HOOK_DISABLE_FAILED Error Failed to disable Gluster Hook ${GlusterHookName} on cluster ${ClusterName}. ${FailureMessage} 4047 GLUSTER_HOOK_DISABLE_PARTIAL Warning Gluster Hook ${GlusterHookName} disabled on some of the servers on cluster ${ClusterName}. ${FailureMessage} 4048 GLUSTER_HOOK_LIST_FAILED Error Failed to retrieve hook list from ${VdsName} of Cluster ${ClusterName}. 4049 GLUSTER_HOOK_CONFLICT_DETECTED Warning Detected conflict in hook ${HookName} of Cluster ${ClusterName}. 4050 GLUSTER_HOOK_DETECTED_NEW Info Detected new hook ${HookName} in Cluster ${ClusterName}. 4051 GLUSTER_HOOK_DETECTED_DELETE Info Detected removal of hook ${HookName} in Cluster ${ClusterName}. 4052 GLUSTER_VOLUME_OPTION_MODIFIED Info Volume Option ${Key} changed to ${Value} from ${oldvalue} on ${glusterVolumeName} of cluster ${clusterName}. 
4053 GLUSTER_HOOK_GETCONTENT_FAILED Error Failed to read content of hook USD{HookName} in Cluster USD{ClusterName}. 4054 GLUSTER_SERVICES_LIST_FAILED Error Could not fetch statuses of services from server USD{VdsName}. Updating statuses of all services on this server to UNKNOWN. 4055 GLUSTER_SERVICE_TYPE_ADDED_TO_CLUSTER Info Service type USD{ServiceType} was not mapped to cluster USD{ClusterName}. Mapped it now. 4056 GLUSTER_CLUSTER_SERVICE_STATUS_CHANGED Info Status of service type USD{ServiceType} changed from USD{OldStatus} to USD{NewStatus} on cluster USD{ClusterName} 4057 GLUSTER_SERVICE_ADDED_TO_SERVER Info Service USD{ServiceName} was not mapped to server USD{VdsName}. Mapped it now. 4058 GLUSTER_SERVER_SERVICE_STATUS_CHANGED Info Status of service USD{ServiceName} on server USD{VdsName} changed from USD{OldStatus} to USD{NewStatus}. Updating in engine now. 4059 GLUSTER_HOOK_UPDATED Info Gluster Hook USD{GlusterHookName} updated on conflicting servers. 4060 GLUSTER_HOOK_UPDATE_FAILED Error Failed to update Gluster Hook USD{GlusterHookName} on conflicting servers. USD{FailureMessage} 4061 GLUSTER_HOOK_ADDED Info Gluster Hook USD{GlusterHookName} added on conflicting servers. 4062 GLUSTER_HOOK_ADD_FAILED Error Failed to add Gluster Hook USD{GlusterHookName} on conflicting servers. USD{FailureMessage} 4063 GLUSTER_HOOK_REMOVED Info Gluster Hook USD{GlusterHookName} removed from all servers in cluster USD{ClusterName}. 4064 GLUSTER_HOOK_REMOVE_FAILED Error Failed to remove Gluster Hook USD{GlusterHookName} from cluster USD{ClusterName}. USD{FailureMessage} 4065 GLUSTER_HOOK_REFRESH Info Refreshed gluster hooks in Cluster USD{ClusterName}. 4066 GLUSTER_HOOK_REFRESH_FAILED Error Failed to refresh gluster hooks in Cluster USD{ClusterName}. 4067 GLUSTER_SERVICE_STARTED Info USD{servicetype} service started on host USD{VdsName} of cluster USD{ClusterName}. 4068 GLUSTER_SERVICE_START_FAILED Error Could not start USD{servicetype} service on host USD{VdsName} of cluster USD{ClusterName}. 4069 GLUSTER_SERVICE_STOPPED Info USD{servicetype} services stopped on host USD{VdsName} of cluster USD{ClusterName}. 4070 GLUSTER_SERVICE_STOP_FAILED Error Could not stop USD{servicetype} service on host USD{VdsName} of cluster USD{ClusterName}. 4071 GLUSTER_SERVICES_LIST_NOT_FETCHED Info Could not fetch list of services from USD{ServiceGroupType} named USD{ServiceGroupName}. 4072 GLUSTER_SERVICE_RESTARTED Info USD{servicetype} service re-started on host USD{VdsName} of cluster USD{ClusterName}. 4073 GLUSTER_SERVICE_RESTART_FAILED Error Could not re-start USD{servicetype} service on host USD{VdsName} of cluster USD{ClusterName}. 4074 GLUSTER_VOLUME_OPTIONS_RESET_ALL Info All Volume Options reset on USD{glusterVolumeName} of cluster USD{clusterName}. 4075 GLUSTER_HOST_UUID_NOT_FOUND Error Could not find gluster uuid of server USD{VdsName} on Cluster USD{ClusterName}. 4076 GLUSTER_VOLUME_BRICK_ADDED Info Brick [USD{brickpath}] on host [USD{servername}] added to volume [USD{glusterVolumeName}] of cluster USD{clusterName} 4077 GLUSTER_CLUSTER_SERVICE_STATUS_ADDED Info Status of service type USD{ServiceType} set to USD{NewStatus} on cluster USD{ClusterName} 4078 GLUSTER_VOLUME_REBALANCE_STOP Info Gluster Volume USD{glusterVolumeName} rebalance stopped of cluster USD{clusterName}. 4079 GLUSTER_VOLUME_REBALANCE_STOP_FAILED Error Could not stop rebalance of gluster volume USD{glusterVolumeName} of cluster USD{clusterName}. 
4080 START_REMOVING_GLUSTER_VOLUME_BRICKS Info Started removing bricks from Volume ${glusterVolumeName} of cluster ${clusterName} 4081 START_REMOVING_GLUSTER_VOLUME_BRICKS_FAILED Error Could not start remove bricks from Volume ${glusterVolumeName} of cluster ${clusterName} 4082 GLUSTER_VOLUME_REMOVE_BRICKS_STOP Info Stopped removing bricks from Volume ${glusterVolumeName} of cluster ${clusterName} 4083 GLUSTER_VOLUME_REMOVE_BRICKS_STOP_FAILED Error Failed to stop remove bricks from Volume ${glusterVolumeName} of cluster ${clusterName} 4084 GLUSTER_VOLUME_REMOVE_BRICKS_COMMIT Info Gluster volume ${glusterVolumeName} remove bricks committed on cluster ${clusterName}. ${NoOfBricks} brick(s) removed from volume ${glusterVolumeName}. 4085 GLUSTER_VOLUME_REMOVE_BRICKS_COMMIT_FAILED Error Gluster volume ${glusterVolumeName} remove bricks could not be committed on cluster ${clusterName} 4086 GLUSTER_BRICK_STATUS_CHANGED Warning Detected change in status of brick ${brickpath} of volume ${glusterVolumeName} of cluster ${clusterName} from ${oldValue} to ${newValue} via ${source}. 4087 GLUSTER_VOLUME_REBALANCE_FINISHED Info ${action} ${status} on volume ${glusterVolumeName} of cluster ${clusterName}. 4088 GLUSTER_VOLUME_MIGRATE_BRICK_DATA_FINISHED Info ${action} ${status} for brick(s) on volume ${glusterVolumeName} of cluster ${clusterName}. Please review to abort or commit. 4089 GLUSTER_VOLUME_REBALANCE_START_DETECTED_FROM_CLI Info Detected start of rebalance on volume ${glusterVolumeName} of Cluster ${ClusterName} from CLI. 4090 START_REMOVING_GLUSTER_VOLUME_BRICKS_DETECTED_FROM_CLI Info Detected start of brick removal for bricks ${brick} on volume ${glusterVolumeName} of Cluster ${ClusterName} from CLI. 4091 GLUSTER_VOLUME_REBALANCE_NOT_FOUND_FROM_CLI Warning Could not find information for rebalance on volume ${glusterVolumeName} of Cluster ${ClusterName} from CLI. Marking it as unknown. 4092 REMOVE_GLUSTER_VOLUME_BRICKS_NOT_FOUND_FROM_CLI Warning Could not find information for remove brick on volume ${glusterVolumeName} of Cluster ${ClusterName} from CLI. Marking it as unknown. 4093 GLUSTER_VOLUME_DETAILS_REFRESH Info Refreshed details of the volume ${glusterVolumeName} of cluster ${clusterName}. 4094 GLUSTER_VOLUME_DETAILS_REFRESH_FAILED Error Failed to refresh the details of volume ${glusterVolumeName} of cluster ${clusterName}. 4095 GLUSTER_HOST_UUID_ALREADY_EXISTS Error Gluster UUID of host ${VdsName} on Cluster ${ClusterName} already exists. 4096 USER_FORCE_SELECTED_SPM_STOP_FAILED Error Failed to force select ${VdsName} as the SPM due to a failure to stop the current SPM. 4097 GLUSTER_GEOREP_SESSION_DELETED_FROM_CLI Warning Detected deletion of geo-replication session ${geoRepSessionKey} from volume ${glusterVolumeName} of cluster ${clusterName} 4098 GLUSTER_GEOREP_SESSION_DETECTED_FROM_CLI Warning Detected new geo-replication session ${geoRepSessionKey} for volume ${glusterVolumeName} of cluster ${clusterName}. Adding it to engine. 4099 GLUSTER_GEOREP_SESSION_REFRESH Info Refreshed geo-replication sessions for volume ${glusterVolumeName} of cluster ${clusterName}. 4100 GLUSTER_GEOREP_SESSION_REFRESH_FAILED Error Failed to refresh geo-replication sessions for volume ${glusterVolumeName} of cluster ${clusterName}. 4101 GEOREP_SESSION_STOP Info Geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName} has been stopped. 
4102 GEOREP_SESSION_STOP_FAILED Error Failed to stop geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4103 GEOREP_SESSION_DELETED Info Geo-replication session deleted on volume USD{glusterVolumeName} of cluster USD{clusterName} 4104 GEOREP_SESSION_DELETE_FAILED Error Failed to delete geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4105 GLUSTER_GEOREP_CONFIG_SET Info Configuration USD{key} has been set to USD{value} on the geo-rep session USD{geoRepSessionKey}. 4106 GLUSTER_GEOREP_CONFIG_SET_FAILED Error Failed to set the configuration USD{key} to USD{value} on geo-rep session USD{geoRepSessionKey}. 4107 GLUSTER_GEOREP_CONFIG_LIST Info Refreshed configuration options for geo-replication session USD{geoRepSessionKey} 4108 GLUSTER_GEOREP_CONFIG_LIST_FAILED Error Failed to refresh configuration options for geo-replication session USD{geoRepSessionKey} 4109 GLUSTER_GEOREP_CONFIG_SET_DEFAULT Info Configuration of USD{key} of session USD{geoRepSessionKey} reset to its default value . 4110 GLUSTER_GEOREP_CONFIG_SET_DEFAULT_FAILED Error Failed to set USD{key} of session USD{geoRepSessionKey} to its default value. 4111 GLUSTER_VOLUME_SNAPSHOT_DELETED Info Gluster volume snapshot USD{snapname} deleted. 4112 GLUSTER_VOLUME_SNAPSHOT_DELETE_FAILED Error Failed to delete gluster volume snapshot USD{snapname}. 4113 GLUSTER_VOLUME_ALL_SNAPSHOTS_DELETED Info Deleted all the gluster volume snapshots for the volume USD{glusterVolumeName} of cluster USD{clusterName}. 4114 GLUSTER_VOLUME_ALL_SNAPSHOTS_DELETE_FAILED Error Failed to delete all the gluster volume snapshots for the volume USD{glusterVolumeName} of cluster USD{clusterName}. 4115 GLUSTER_VOLUME_SNAPSHOT_ACTIVATED Info Activated the gluster volume snapshot USD{snapname} on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4116 GLUSTER_VOLUME_SNAPSHOT_ACTIVATE_FAILED Error Failed to activate the gluster volume snapshot USD{snapname} on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4117 GLUSTER_VOLUME_SNAPSHOT_DEACTIVATED Info De-activated the gluster volume snapshot USD{snapname} on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4118 GLUSTER_VOLUME_SNAPSHOT_DEACTIVATE_FAILED Error Failed to de-activate gluster volume snapshot USD{snapname} on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4119 GLUSTER_VOLUME_SNAPSHOT_RESTORED Info Restored the volume USD{glusterVolumeName} of cluster USD{clusterName} to the state of gluster volume snapshot USD{snapname}. 4120 GLUSTER_VOLUME_SNAPSHOT_RESTORE_FAILED Error Failed to restore the volume USD{glusterVolumeName} of cluster USD{clusterName} to the state of gluster volume snapshot USD{snapname}. 4121 GLUSTER_VOLUME_SNAPSHOT_CONFIG_UPDATED Info Updated Gluster volume snapshot configuration(s). 4122 GLUSTER_VOLUME_SNAPSHOT_CONFIG_UPDATE_FAILED Error Failed to update gluster volume snapshot configuration(s). 4123 GLUSTER_VOLUME_SNAPSHOT_CONFIG_UPDATE_FAILED_PARTIALLY Error Failed to update gluster volume snapshot configuration(s) USD{failedSnapshotConfigs}. 4124 NEW_STORAGE_DEVICE_DETECTED Info Found new storage device USD{storageDevice} on host USD{VdsName}, and added it to engine DB." 4125 STORAGE_DEVICE_REMOVED_FROM_THE_HOST Info Detected deletion of storage device USD{storageDevice} on host USD{VdsName}, and deleting it from engine DB." 
4126 SYNC_STORAGE_DEVICES_IN_HOST Info Manually synced the storage devices from host USD{VdsName} 4127 SYNC_STORAGE_DEVICES_IN_HOST_FAILED Error Failed to sync storage devices from host USD{VdsName} 4128 GEOREP_OPTION_SET_FROM_CLI Warning Detected new option USD{key} 4129 GEOREP_OPTION_CHANGED_FROM_CLI Warning Detected change in value of option USD{key} from USD{oldValue} to USD{value} for geo-replication session on volume USD{glusterVolumeName} of cluster USD{ClusterName}, and updated it to engine. 4130 GLUSTER_MASTER_VOLUME_STOP_FAILED_DURING_SNAPSHOT_RESTORE Error Could not stop master volume USD{glusterVolumeName} of cluster USD{clusterName} during snapshot restore. 4131 GLUSTER_MASTER_VOLUME_SNAPSHOT_RESTORE_FAILED Error Could not restore master volume USD{glusterVolumeName} of cluster USD{clusterName}. 4132 GLUSTER_VOLUME_SNAPSHOT_CREATED Info Snapshot USD{snapname} created for volume USD{glusterVolumeName} of cluster USD{clusterName}. 4133 GLUSTER_VOLUME_SNAPSHOT_CREATE_FAILED Error Could not create snapshot for volume USD{glusterVolumeName} of cluster USD{clusterName}. 4134 GLUSTER_VOLUME_SNAPSHOT_SCHEDULED Info Snapshots scheduled on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4135 GLUSTER_VOLUME_SNAPSHOT_SCHEDULE_FAILED Error Failed to schedule snapshots on the volume USD{glusterVolumeName} of cluster USD{clusterName}. 4136 GLUSTER_VOLUME_SNAPSHOT_RESCHEDULED Info Rescheduled snapshots on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4137 GLUSTER_VOLUME_SNAPSHOT_RESCHEDULE_FAILED Error Failed to reschedule snapshots on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4138 CREATE_GLUSTER_BRICK Info Brick USD{brickName} created successfully on host USD{vdsName} of cluster USD{clusterName}. 4139 CREATE_GLUSTER_BRICK_FAILED Error Failed to create brick USD{brickName} on host USD{vdsName} of cluster USD{clusterName}. 4140 GLUSTER_GEO_REP_PUB_KEY_FETCH_FAILED Error Failed to fetch public keys. 4141 GLUSTER_GET_PUB_KEY Info Public key fetched. 4142 GLUSTER_GEOREP_PUBLIC_KEY_WRITE_FAILED Error Failed to write public keys to USD{VdsName} 4143 GLUSTER_WRITE_PUB_KEYS Info Public keys written to USD{VdsName} 4144 GLUSTER_GEOREP_SETUP_MOUNT_BROKER_FAILED Error Failed to setup geo-replication mount broker for user USD{geoRepUserName} on the slave volume USD{geoRepSlaveVolumeName}. 4145 GLUSTER_SETUP_GEOREP_MOUNT_BROKER Info Geo-replication mount broker has been setup for user USD{geoRepUserName} on the slave volume USD{geoRepSlaveVolumeName}. 4146 GLUSTER_GEOREP_SESSION_CREATE_FAILED Error Failed to create geo-replication session between master volume : USD{glusterVolumeName} of cluster USD{clusterName} and slave volume : USD{geoRepSlaveVolumeName} for the user USD{geoRepUserName}. 4147 CREATE_GLUSTER_VOLUME_GEOREP_SESSION Info Created geo-replication session between master volume : USD{glusterVolumeName} of cluster USD{clusterName} and slave volume : USD{geoRepSlaveVolumeName} for the user USD{geoRepUserName}. 4148 GLUSTER_VOLUME_SNAPSHOT_SOFT_LIMIT_REACHED Info Gluster Volume Snapshot soft limit reached for the volume USD{glusterVolumeName} on cluster USD{clusterName}. 4149 HOST_FEATURES_INCOMPATIBILE_WITH_CLUSTER Error Host USD{VdsName} does not comply with the list of features supported by cluster USD{ClusterName}. USD{UnSupportedFeature} is not supported by the Host 4150 GLUSTER_VOLUME_SNAPSHOT_SCHEDULE_DELETED Info Snapshot schedule deleted for volume USD{glusterVolumeName} of USD{clusterName}. 
4151 GLUSTER_BRICK_STATUS_DOWN Info Status of brick USD{brickpath} of volume USD{glusterVolumeName} on cluster USD{ClusterName} is down. 4152 GLUSTER_VOLUME_SNAPSHOT_DETECTED_NEW Info Found new gluster volume snapshot USD{snapname} for volume USD{glusterVolumeName} on cluster USD{ClusterName}, and added it to engine DB." 4153 GLUSTER_VOLUME_SNAPSHOT_DELETED_FROM_CLI Info Detected deletion of gluster volume snapshot USD{snapname} for volume USD{glusterVolumeName} on cluster USD{ClusterName}, and deleting it from engine DB." 4154 GLUSTER_VOLUME_SNAPSHOT_CLUSTER_CONFIG_DETECTED_NEW Info Found new gluster volume snapshot configuration USD{snapConfigName} with value USD{snapConfigValue} on cluster USD{ClusterName}, and added it to engine DB." 4155 GLUSTER_VOLUME_SNAPSHOT_VOLUME_CONFIG_DETECTED_NEW Info Found new gluster volume snapshot configuration USD{snapConfigName} with value USD{snapConfigValue} for volume USD{glusterVolumeName} on cluster USD{ClusterName}, and added it to engine DB." 4156 GLUSTER_VOLUME_SNAPSHOT_HARD_LIMIT_REACHED Info Gluster Volume Snapshot hard limit reached for the volume USD{glusterVolumeName} on cluster USD{clusterName}. 4157 GLUSTER_CLI_SNAPSHOT_SCHEDULE_DISABLE_FAILED Error Failed to disable gluster CLI based snapshot schedule on cluster USD{clusterName}. 4158 GLUSTER_CLI_SNAPSHOT_SCHEDULE_DISABLED Info Disabled gluster CLI based scheduling successfully on cluster USD{clusterName}. 4159 SET_UP_PASSWORDLESS_SSH Info Password-less SSH has been setup for user USD{geoRepUserName} on the nodes of remote volume USD{geoRepSlaveVolumeName} from the nodes of the volume USD{glusterVolumeName}. 4160 SET_UP_PASSWORDLESS_SSH_FAILED Error Failed to setup Passwordless ssh for user USD{geoRepUserName} on the nodes of remote volume USD{geoRepSlaveVolumeName} from the nodes of the volume USD{glusterVolumeName}. 4161 GLUSTER_VOLUME_TYPE_UNSUPPORTED Warning Detected a volume USD{glusterVolumeName} with type USD{glusterVolumeType} on cluster USD{Cluster} and it is not fully supported by engine. 4162 GLUSTER_VOLUME_BRICK_REPLACED Info Replaced brick 'USD{brick}' with new brick 'USD{newBrick}' of Gluster Volume USD{glusterVolumeName} on cluster USD{clusterName} 4163 GLUSTER_SERVER_STATUS_DISCONNECTED Info Gluster server USD{vdsName} set to DISCONNECTED on cluster USD{clusterName}. 4164 GLUSTER_STORAGE_DOMAIN_SYNC_FAILED Info Failed to synchronize data from storage domain USD{storageDomainName} to remote location. 4165 GLUSTER_STORAGE_DOMAIN_SYNCED Info Successfully synchronized data from storage domain USD{storageDomainName} to remote location. 4166 GLUSTER_STORAGE_DOMAIN_SYNC_STARTED Info Successfully started data synchronization data from storage domain USD{storageDomainName} to remote location. 4167 STORAGE_DOMAIN_DR_DELETED Error Deleted the data synchronization schedule for storage domain USD{storageDomainName} as the underlying geo-replication session USD{geoRepSessionKey} has been deleted. 4168 GLUSTER_WEBHOOK_ADDED Info Added webhook on USD{clusterName} 4169 GLUSTER_WEBHOOK_ADD_FAILED Error Failed to add webhook on USD{clusterName} 4170 GLUSTER_VOLUME_RESET_BRICK_FAILED Error 4171 GLUSTER_VOLUME_BRICK_RESETED Info 4172 GLUSTER_VOLUME_CONFIRMED_SPACE_LOW Warning Warning! Low confirmed free space on gluster volume USD{glusterVolumeName} 4436 GLUSTER_SERVER_ADD_FAILED Error Failed to add host USD{VdsName} into Cluster USD{ClusterName}. USD{ErrorMessage} 4437 GLUSTER_SERVERS_LIST_FAILED Error Failed to fetch gluster peer list from server USD{VdsName} on Cluster USD{ClusterName}. 
USD{ErrorMessage} 4595 GLUSTER_VOLUME_GEO_REP_START_FAILED_EXCEPTION Error Failed to start geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4596 GLUSTER_VOLUME_GEO_REP_START Info Geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} has been started. 4597 GLUSTER_VOLUME_GEO_REP_PAUSE_FAILED Error Failed to pause geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4598 GLUSTER_VOLUME_GEO_REP_RESUME_FAILED Error Failed to resume geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4599 GLUSTER_VOLUME_GEO_REP_RESUME Info Geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} has been resumed. 4600 GLUSTER_VOLUME_GEO_REP_PAUSE Info Geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} has been paused. 9000 VDS_ALERT_FENCE_IS_NOT_CONFIGURED Info Failed to verify Power Management configuration for Host USD{VdsName}. 9001 VDS_ALERT_FENCE_TEST_FAILED Info Power Management test failed for Host USD{VdsName}.USD{Reason} 9002 VDS_ALERT_FENCE_OPERATION_FAILED Info Failed to power fence host USD{VdsName}. Please check the host status and it's power management settings, and then manually reboot it and click "Confirm Host Has Been Rebooted" 9003 VDS_ALERT_FENCE_OPERATION_SKIPPED Info Host USD{VdsName} became non responsive. Fence operation skipped as the system is still initializing and this is not a host where hosted engine was running on previously. 9004 VDS_ALERT_FENCE_NO_PROXY_HOST Info There is no other host in the data center that can be used to test the power management settings. 9005 VDS_ALERT_FENCE_STATUS_VERIFICATION_FAILED Info Failed to verify Host USD{Host} USD{Status} status, Please USD{Status} Host USD{Host} manually. 9006 CANNOT_HIBERNATE_RUNNING_VMS_AFTER_CLUSTER_CPU_UPGRADE Warning Hibernation of VMs after CPU upgrade of Cluster USD{Cluster} is not supported. Please stop and restart those VMs in case you wish to hibernate them 9007 VDS_ALERT_SECONDARY_AGENT_USED_FOR_FENCE_OPERATION Info Secondary fence agent was used to USD{Operation} Host USD{VdsName} 9008 VDS_HOST_NOT_RESPONDING_CONNECTING Warning Host USD{VdsName} is not responding. It will stay in Connecting state for a grace period of USD{Seconds} seconds and after that an attempt to fence the host will be issued. 9009 VDS_ALERT_PM_HEALTH_CHECK_FENCE_AGENT_NON_RESPONSIVE Info Health check on Host USD{VdsName} indicates that Fence-Agent USD{AgentId} is non-responsive. 9010 VDS_ALERT_PM_HEALTH_CHECK_START_MIGHT_FAIL Info Health check on Host USD{VdsName} indicates that future attempts to Start this host using Power-Management are expected to fail. 9011 VDS_ALERT_PM_HEALTH_CHECK_STOP_MIGHT_FAIL Info Health check on Host USD{VdsName} indicates that future attempts to Stop this host using Power-Management are expected to fail. 9012 VDS_ALERT_PM_HEALTH_CHECK_RESTART_MIGHT_FAIL Info Health check on Host USD{VdsName} indicates that future attempts to Restart this host using Power-Management are expected to fail. 9013 VDS_ALERT_FENCE_OPERATION_SKIPPED_BROKEN_CONNECTIVITY Info Host USD{VdsName} became non responsive and was not restarted due to Fencing Policy: USD{Percents} percents of the Hosts in the Cluster have connectivity issues. 9014 VDS_ALERT_NOT_RESTARTED_DUE_TO_POLICY Info Host USD{VdsName} became non responsive and was not restarted due to the Cluster Fencing Policy. 
9015 VDS_ALERT_FENCE_DISABLED_BY_CLUSTER_POLICY Info Host USD{VdsName} became Non Responsive and was not restarted due to disabled fencing in the Cluster Fencing Policy. 9016 FENCE_DISABLED_IN_CLUSTER_POLICY Info Fencing is disabled in Fencing Policy of the Cluster USD{ClusterName}, so HA VMs running on a non-responsive host will not be restarted elsewhere. 9017 FENCE_OPERATION_STARTED Info Power management USD{Action} of Host USD{VdsName} initiated. 9018 FENCE_OPERATION_SUCCEEDED Info Power management USD{Action} of Host USD{VdsName} succeeded. 9019 FENCE_OPERATION_FAILED Error Power management USD{Action} of Host USD{VdsName} failed. 9020 FENCE_OPERATION_USING_AGENT_AND_PROXY_STARTED Info Executing power management USD{Action} on Host USD{Host} using Proxy Host USD{ProxyHost} and Fence Agent USD{AgentType}:USD{AgentIp}. 9021 FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED Warning Execution of power management USD{Action} on Host USD{Host} using Proxy Host USD{ProxyHost} and Fence Agent USD{AgentType}:USD{AgentIp} failed. 9022 ENGINE_NO_FULL_BACKUP Info There is no full backup available, please run engine-backup to prevent data loss in case of corruption. 9023 ENGINE_NO_WARM_BACKUP Info Full backup was created on USD{Date} and it's too old. Please run engine-backup to prevent data loss in case of corruption. 9024 ENGINE_BACKUP_STARTED Normal Engine backup started. 9025 ENGINE_BACKUP_COMPLETED Normal Engine backup completed successfully. 9026 ENGINE_BACKUP_FAILED Error Engine backup failed. 9028 VDS_ALERT_NO_PM_CONFIG_FENCE_OPERATION_SKIPPED Info Host USD{VdsName} became non responsive. It has no power management configured. Please check the host status, manually reboot it, and click "Confirm Host Has Been Rebooted" 9500 TASK_STOPPING_ASYNC_TASK Info Stopping async task USD{CommandName} that started at USD{Date} 9501 TASK_CLEARING_ASYNC_TASK Info Clearing asynchronous task USD{CommandName} that started at USD{Date} 9506 USER_ACTIVATE_STORAGE_DOMAIN_FAILED_ASYNC Warning Failed to autorecover Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}). 9600 IMPORTEXPORT_IMPORT_VM_INVALID_INTERFACES Warning While importing VM USD{EntityName}, the Network/s USD{Networks} were found to be Non-VM Networks or do not exist in Cluster or are missing a suitable VM network interface profile. Network Name was not set in the Interface/s USD{Interfaces}. 9601 VDS_SET_NON_OPERATIONAL_VM_NETWORK_IS_BRIDGELESS Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} networks, the following VM networks are non-VM networks: 'USD{Networks}'. The host will become NonOperational. 9602 HA_VM_FAILED Error Highly Available VM USD{VmName} failed. It will be restarted automatically. 9603 HA_VM_RESTART_FAILED Error Restart of the Highly Available VM USD{VmName} failed. 9604 EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} emulated machine. The cluster emulated machine is USD{clusterEmulatedMachines} and the host emulated machines are USD{hostSupportedEmulatedMachines}. 9605 EXCEEDED_MAXIMUM_NUM_OF_RESTART_HA_VM_ATTEMPTS Error Highly Available VM USD{VmName} could not be restarted automatically, exceeded the maximum number of attempts. 9606 IMPORTEXPORT_SNAPSHOT_VM_INVALID_INTERFACES Warning While previewing a snapshot of VM USD{EntityName}, the Network/s USD{Networks} were found to be Non-VM Networks or do not exist in Cluster. Network Name was not set in the Interface/s USD{Interfaces}. 
9607 ADD_VM_FROM_SNAPSHOT_INVALID_INTERFACES Warning While adding vm USD{EntityName} from snapshot, the Network/s USD{Networks} were found to be Non-VM Networks or do not exist in Cluster. Network Name was not set in the Interface/s USD{Interfaces}. 9608 RNG_SOURCES_INCOMPATIBLE_WITH_CLUSTER Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} Random Number Generator sources. The Hosts supported sources are: USD{hostSupportedRngSources}; and the cluster requirements are: USD{clusterRequiredRngSources}. 9609 EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} emulated machines. The current cluster compatibility level supports USD{clusterEmulatedMachines} and the host emulated machines are USD{hostSupportedEmulatedMachines}. 9610 MIXING_RHEL_VERSIONS_IN_CLUSTER Warning Not possible to mix RHEL 6.x and 7.x hosts in one cluster. Tried adding USD{addingRhel} host to a cluster with USD{previousRhel} hosts. 9611 COLD_REBOOT_VM_DOWN Info VM USD{VmName} is down as a part of cold reboot process 9612 COLD_REBOOT_FAILED Error Cold reboot of VM USD{VmName} failed 9613 EXCEEDED_MAXIMUM_NUM_OF_COLD_REBOOT_VM_ATTEMPTS Error VM USD{VmName} could not be rebooted, exceeded the maximum number of attempts. 9700 DWH_STARTED Info ETL Service started. 9701 DWH_STOPPED Info ETL Service stopped. 9704 DWH_ERROR Error Error in ETL Service. 9801 EXTERNAL_EVENT_NORMAL Info An external event with NORMAL severity has been added. 9802 EXTERNAL_EVENT_WARNING Warning An external event with WARNING severity has been added. 9803 EXTERNAL_EVENT_ERROR Error An external event with ERROR severity has been added. 9804 EXTERNAL_ALERT Info An external event with ALERT severity has been added. 9901 WATCHDOG_EVENT Warning Watchdog event (USD{wdaction}) triggered on USD{VmName} at USD{wdevent} (host time). 9910 USER_ADD_CLUSTER_POLICY Info Scheduling Policy USD{ClusterPolicy} was added. (User: USD{UserName}) 9911 USER_FAILED_TO_ADD_CLUSTER_POLICY Error Failed to add Scheduling Policy: USD{ClusterPolicy}. (User: USD{UserName}) 9912 USER_UPDATE_CLUSTER_POLICY Info Scheduling Policy USD{ClusterPolicy} was updated. (User: USD{UserName}) 9913 USER_FAILED_TO_UPDATE_CLUSTER_POLICY Error Failed to update Scheduling Policy: USD{ClusterPolicy}. (User: USD{UserName}) 9914 USER_REMOVE_CLUSTER_POLICY Info Scheduling Policy USD{ClusterPolicy} was removed. (User: USD{UserName}) 9915 USER_FAILED_TO_REMOVE_CLUSTER_POLICY Error Failed to remove Scheduling Policy: USD{ClusterPolicy}. (User: USD{UserName}) 9920 FAILED_TO_CONNECT_TO_SCHEDULER_PROXY Error Failed to connect to external scheduler proxy. External filters, scoring functions and load balancing will not be performed. 10000 VDS_UNTRUSTED Error Host USD{VdsName} was set to non-operational. Host is not trusted by the attestation service. 10001 USER_UPDATE_VM_FROM_TRUSTED_TO_UNTRUSTED Warning The VM USD{VmName} was updated from trusted cluster to non-trusted cluster. 10002 USER_UPDATE_VM_FROM_UNTRUSTED_TO_TRUSTED Warning The VM USD{VmName} was updated from non-trusted cluster to trusted cluster. 10003 IMPORTEXPORT_IMPORT_VM_FROM_TRUSTED_TO_UNTRUSTED Warning The VM USD{VmName} was created in trusted cluster and imported into a non-trusted cluster 10004 IMPORTEXPORT_IMPORT_VM_FROM_UNTRUSTED_TO_TRUSTED Warning The VM USD{VmName} was created in non-trusted cluster and imported into a trusted cluster 10005 USER_ADD_VM_FROM_TRUSTED_TO_UNTRUSTED Warning The VM USD{VmName} was created in an untrusted cluster. 
It was originated from the Template USD{VmTemplateName} which was created in a trusted cluster. 10006 USER_ADD_VM_FROM_UNTRUSTED_TO_TRUSTED Warning The VM USD{VmName} was created in a trusted cluster. It was originated from the Template USD{VmTemplateName} which was created in an untrusted cluster. 10007 IMPORTEXPORT_IMPORT_TEMPLATE_FROM_TRUSTED_TO_UNTRUSTED Warning The Template USD{VmTemplateName} was created in trusted cluster and imported into a non-trusted cluster 10008 IMPORTEXPORT_IMPORT_TEMPLATE_FROM_UNTRUSTED_TO_TRUSTED Warning The Template USD{VmTemplateName} was created in non-trusted cluster and imported into a trusted cluster 10009 USER_ADD_VM_TEMPLATE_FROM_TRUSTED_TO_UNTRUSTED Warning The non-trusted Template USD{VmTemplateName} was created from trusted Vm USD{VmName}. 10010 USER_ADD_VM_TEMPLATE_FROM_UNTRUSTED_TO_TRUSTED Warning The trusted template USD{VmTemplateName} was created from non-trusted Vm USD{VmName}. 10011 USER_UPDATE_VM_TEMPLATE_FROM_TRUSTED_TO_UNTRUSTED Warning The Template USD{VmTemplateName} was updated from trusted cluster to non-trusted cluster. 10012 USER_UPDATE_VM_TEMPLATE_FROM_UNTRUSTED_TO_TRUSTED Warning The Template USD{VmTemplateName} was updated from non-trusted cluster to trusted cluster. 10013 IMPORTEXPORT_GET_EXTERNAL_VMS_NOT_IN_DOWN_STATUS Warning The following VMs retrieved from external server USD{URL} are not in down status: USD{Vms}. 10100 USER_ADDED_NETWORK_QOS Info Network QoS USD{QosName} was added. (User: USD{UserName}) 10101 USER_FAILED_TO_ADD_NETWORK_QOS Error Failed to add Network QoS USD{QosName}. (User: USD{UserName}) 10102 USER_REMOVED_NETWORK_QOS Info Network QoS USD{QosName} was removed. (User: USD{UserName}) 10103 USER_FAILED_TO_REMOVE_NETWORK_QOS Error Failed to remove Network QoS USD{QosName}. (User: USD{UserName}) 10104 USER_UPDATED_NETWORK_QOS Info Network QoS USD{QosName} was updated. (User: USD{UserName}) 10105 USER_FAILED_TO_UPDATE_NETWORK_QOS Error Failed to update Network QoS USD{QosName}. (User: USD{UserName}) 10110 USER_ADDED_QOS Info QoS USD{QoSName} was added. (User: USD{UserName}) 10111 USER_FAILED_TO_ADD_QOS Error Failed to add QoS USD{QoSName}. (User: USD{UserName}) 10112 USER_REMOVED_QOS Info QoS USD{QoSName} was removed. (User: USD{UserName}) 10113 USER_FAILED_TO_REMOVE_QOS Error Failed to remove QoS USD{QoSName}. (User: USD{UserName}) 10114 USER_UPDATED_QOS Info QoS USD{QoSName} was updated. (User: USD{UserName}) 10115 USER_FAILED_TO_UPDATE_QOS Error Failed to update QoS USD{QoSName}. (User: USD{UserName}) 10120 USER_ADDED_DISK_PROFILE Info Disk Profile USD{ProfileName} was successfully added (User: USD{UserName}). 10121 USER_FAILED_TO_ADD_DISK_PROFILE Error Failed to add Disk Profile (User: USD{UserName}). 10122 USER_REMOVED_DISK_PROFILE Info Disk Profile USD{ProfileName} was successfully removed (User: USD{UserName}). 10123 USER_FAILED_TO_REMOVE_DISK_PROFILE Error Failed to remove Disk Profile USD{ProfileName} (User: USD{UserName}). 10124 USER_UPDATED_DISK_PROFILE Info Disk Profile USD{ProfileName} was successfully updated (User: USD{UserName}). 10125 USER_FAILED_TO_UPDATE_DISK_PROFILE Error Failed to update Disk Profile USD{ProfileName} (User: USD{UserName}). 10130 USER_ADDED_CPU_PROFILE Info CPU Profile USD{ProfileName} was successfully added (User: USD{UserName}). 10131 USER_FAILED_TO_ADD_CPU_PROFILE Error Failed to add CPU Profile (User: USD{UserName}). 10132 USER_REMOVED_CPU_PROFILE Info CPU Profile USD{ProfileName} was successfully removed (User: USD{UserName}). 
10133 USER_FAILED_TO_REMOVE_CPU_PROFILE Error Failed to remove CPU Profile USD{ProfileName} (User: USD{UserName}). 10134 USER_UPDATED_CPU_PROFILE Info CPU Profile USD{ProfileName} was successfully updated (User: USD{UserName}). 10135 USER_FAILED_TO_UPDATE_CPU_PROFILE Error Failed to update CPU Profile USD{ProfileName} (User: USD{UserName}). 10200 USER_UPDATED_MOM_POLICIES Info Mom policy was updated on host USD{VdsName}. 10201 USER_FAILED_TO_UPDATE_MOM_POLICIES Warning Mom policy could not be updated on host USD{VdsName}. 10250 PM_POLICY_UP_TO_MAINTENANCE Info Host USD{Host} is not currently needed, activating maintenance mode in preparation for shutdown. 10251 PM_POLICY_MAINTENANCE_TO_DOWN Info Host USD{Host} is not currently needed, shutting down. 10252 PM_POLICY_TO_UP Info Reactivating host USD{Host} according to the current power management policy. 10300 CLUSTER_ALERT_HA_RESERVATION Info Cluster USD{ClusterName} failed the HA Reservation check, HA VMs on host(s): USD{Hosts} will fail to migrate in case of a failover, consider adding resources or shutting down unused VMs. 10301 CLUSTER_ALERT_HA_RESERVATION_DOWN Info Cluster USD{ClusterName} passed the HA Reservation check. 10350 USER_ADDED_AFFINITY_GROUP Info Affinity Group USD{affinityGroupName} was added. (User: USD{UserName}) 10351 USER_FAILED_TO_ADD_AFFINITY_GROUP Error Failed to add Affinity Group USD{affinityGroupName}. (User: USD{UserName}) 10352 USER_UPDATED_AFFINITY_GROUP Info Affinity Group USD{affinityGroupName} was updated. (User: USD{UserName}) 10353 USER_FAILED_TO_UPDATE_AFFINITY_GROUP Error Failed to update Affinity Group USD{affinityGroupName}. (User: USD{UserName}) 10354 USER_REMOVED_AFFINITY_GROUP Info Affinity Group USD{affinityGroupName} was removed. (User: USD{UserName}) 10355 USER_FAILED_TO_REMOVE_AFFINITY_GROUP Error Failed to remove Affinity Group USD{affinityGroupName}. (User: USD{UserName}) 10356 VM_TO_HOST_CONFLICT_IN_ENFORCING_POSITIVE_AND_NEGATIVE_AFFINITY Error The affinity groups: USD{AffinityGroups}, with hosts :USD{Hosts} and VMs : USD{Vms}, have VM to host conflicts between positive and negative enforcing affinity groups. 10357 VM_TO_HOST_CONFLICT_IN_POSITIVE_AND_NEGATIVE_AFFINITY Warning The affinity groups: USD{AffinityGroups}, with hosts: USD{Hosts} and VMs: USD{Vms}, have VM to host conflicts between positive and negative affinity groups. 10358 VM_TO_HOST_CONFLICTS_POSITIVE_VM_TO_VM_AFFINITY Warning The affinity groups: USD{AffinityGroups}, with hosts : USD{Hosts} and VMs: USD{Vms}, have conflicts between VM to host affinity and VM to VM positive affinity. 10359 VM_TO_HOST_CONFLICTS_NEGATIVE_VM_TO_VM_AFFINITY Warning The affinity groups: USD{AffinityGroups}, with hosts : USD{Hosts} and VMs: USD{Vms}, have conflicts between VM to host affinity and VM to VM negative affinity. 10360 NON_INTERSECTING_POSITIVE_HOSTS_AFFINITY_CONFLICTS Warning The affinity groups: USD{AffinityGroups}, with hosts : USD{Hosts} and VMs : USD{Vms} , have non intersecting positive hosts conflicts. 10361 VM_TO_VM_AFFINITY_CONFLICTS Error 10380 USER_ADDED_AFFINITY_LABEL Info Affinity Label USD{labelName} was added. (User: USD{UserName}) 10381 USER_FAILED_TO_ADD_AFFINITY_LABEL Error Failed to add Affinity Label USD{labelName}. (User: USD{UserName}) 10382 USER_UPDATED_AFFINITY_LABEL Info Affinity Label USD{labelName} was updated. (User: USD{UserName}) 10383 USER_FAILED_TO_UPDATE_AFFINITY_LABEL Error Failed to update Affinity Label USD{labelName}. 
(User: USD{UserName}) 10384 USER_REMOVED_AFFINITY_LABEL Info Affinity Label USD{labelName} was removed. (User: USD{UserName}) 10385 USER_FAILED_TO_REMOVE_AFFINITY_LABEL Error Failed to remove Affinity Label USD{labelName}. (User: USD{UserName}) 10400 ISCSI_BOND_ADD_SUCCESS Info iSCSI bond 'USD{IscsiBondName}' was successfully created in Data Center 'USD{StoragePoolName}'. 10401 ISCSI_BOND_ADD_FAILED Error Failed to create iSCSI bond 'USD{IscsiBondName}' in Data Center 'USD{StoragePoolName}'. 10402 ISCSI_BOND_EDIT_SUCCESS Info iSCSI bond 'USD{IscsiBondName}' was successfully updated. 10403 ISCSI_BOND_EDIT_FAILED Error Failed to update iSCSI bond 'USD{IscsiBondName}'. 10404 ISCSI_BOND_REMOVE_SUCCESS Info iSCSI bond 'USD{IscsiBondName}' was removed from Data Center 'USD{StoragePoolName}' 10405 ISCSI_BOND_REMOVE_FAILED Error Failed to remove iSCSI bond 'USD{IscsiBondName}' from Data Center 'USD{StoragePoolName}' 10406 ISCSI_BOND_EDIT_SUCCESS_WITH_WARNING Warning iSCSI bond 'USD{IscsiBondName}' was successfully updated but some of the hosts encountered connection issues. 10407 ISCSI_BOND_ADD_SUCCESS_WITH_WARNING Warning iSCSI bond 'USD{IscsiBondName}' was successfully created in Data Center 'USD{StoragePoolName}' but some of the hosts encountered connection issues. 10450 USER_SET_HOSTED_ENGINE_MAINTENANCE Info Hosted Engine HA maintenance mode was updated on host USD{VdsName}. 10451 USER_FAILED_TO_SET_HOSTED_ENGINE_MAINTENANCE Error Hosted Engine HA maintenance mode could not be updated on host USD{VdsName}. 10452 VDS_MAINTENANCE_MANUAL_HA Warning Host USD{VdsName} was switched to Maintenance mode, but Hosted Engine HA maintenance could not be enabled. Please enable it manually. 10453 USER_VDS_MAINTENANCE_MANUAL_HA Warning Host USD{VdsName} was switched to Maintenance mode by USD{UserName}, but Hosted Engine HA maintenance could not be enabled. Please enable it manually. 10454 VDS_ACTIVATE_MANUAL_HA Warning Host USD{VdsName} was activated by USD{UserName}, but the Hosted Engine HA service may still be in maintenance mode. If necessary, please correct this manually. 10455 VDS_ACTIVATE_MANUAL_HA_ASYNC Warning Host USD{VdsName} was autorecovered, but the Hosted Engine HA service may still be in maintenance mode. If necessary, please correct this manually. 10456 HOSTED_ENGINE_VM_IMPORT_SUCCEEDED Normal Hosted Engine VM was imported successfully 10460 HOSTED_ENGINE_DOMAIN_IMPORT_SUCCEEDED Normal Hosted Engine Storage Domain imported successfully 10461 HOSTED_ENGINE_DOMAIN_IMPORT_FAILED Error Failed to import the Hosted Engine Storage Domain 10500 EXTERNAL_SCHEDULER_PLUGIN_ERROR Error Running the external scheduler plugin 'USD{PluginName}' failed: 'USD{ErrorMessage}' 10501 EXTERNAL_SCHEDULER_ERROR Error Running the external scheduler failed: 'USD{ErrorMessage}' 10550 VM_SLA_POLICY_CPU Info VM USD{VmName} SLA Policy was set. CPU limit is set to USD{cpuLimit} 10551 VM_SLA_POLICY_STORAGE Info VM USD{VmName} SLA Policy was set. Storage policy changed for disks: [USD{diskList}] 10552 VM_SLA_POLICY_CPU_STORAGE Info VM USD{VmName} SLA Policy was set. CPU limit is set to USD{cpuLimit}. Storage policy changed for disks: [USD{diskList}] 10553 FAILED_VM_SLA_POLICY Error Failed to set SLA Policy to VM USD{VmName}. Underlying error message: USD{ErrorMessage} 10600 USER_REMOVE_AUDIT_LOG Info Event list message USD{AuditLogId} was removed by User USD{UserName}. 10601 USER_REMOVE_AUDIT_LOG_FAILED Error User USD{UserName} failed to remove event list message USD{AuditLogId}. 
10602 USER_CLEAR_ALL_AUDIT_LOG_EVENTS Info All events were removed. (User: USD{UserName}) 10603 USER_CLEAR_ALL_AUDIT_LOG_EVENTS_FAILED Error Failed to remove all events. (User: USD{UserName}) 10604 USER_DISPLAY_ALL_AUDIT_LOG Info All events were displayed. (User: USD{UserName}) 10605 USER_DISPLAY_ALL_AUDIT_LOG_FAILED Error Failed to display all events. (User: USD{UserName}) 10606 USER_CLEAR_ALL_AUDIT_LOG_ALERTS Info All alerts were removed. (User: USD{UserName}) 10607 USER_CLEAR_ALL_AUDIT_LOG_ALERTS_FAILED Error Failed to remove all alerts. (User: USD{UserName}) 10700 MAC_POOL_ADD_SUCCESS Info MAC Pool 'USD{MacPoolName}' (id 10701 MAC_POOL_ADD_FAILED Error Failed to create MAC Pool 'USD{MacPoolName}'. (User: USD{UserName}) 10702 MAC_POOL_EDIT_SUCCESS Info MAC Pool 'USD{MacPoolName}' (id 10703 MAC_POOL_EDIT_FAILED Error Failed to update MAC Pool 'USD{MacPoolName}' (id 10704 MAC_POOL_REMOVE_SUCCESS Info MAC Pool 'USD{MacPoolName}' (id 10705 MAC_POOL_REMOVE_FAILED Error Failed to remove MAC Pool 'USD{MacPoolName}' (id 10750 CINDER_PROVIDER_ERROR Error An error occurred on Cinder provider: 'USD{CinderException}' 10751 CINDER_DISK_CONNECTION_FAILURE Error Failed to retrieve connection information for Cinder Disk 'USD{DiskAlias}'. 10752 CINDER_DISK_CONNECTION_VOLUME_DRIVER_UNSUPPORTED Error Unsupported volume driver for Cinder Disk 'USD{DiskAlias}'. 10753 USER_FINISHED_FAILED_REMOVE_CINDER_DISK Error Failed to remove disk USD{DiskAlias} from storage domain USD{StorageDomainName}. The following entity id could not be deleted from the Cinder provider 'USD{imageId}'. (User: USD{UserName}). 10754 USER_ADDED_LIBVIRT_SECRET Info Authentication Key USD{LibvirtSecretUUID} was added. (User: USD{UserName}). 10755 USER_FAILED_TO_ADD_LIBVIRT_SECRET Error Failed to add Authentication Key USD{LibvirtSecretUUID}. (User: USD{UserName}). 10756 USER_UPDATE_LIBVIRT_SECRET Info Authentication Key USD{LibvirtSecretUUID} was updated. (User: USD{UserName}). 10757 USER_FAILED_TO_UPDATE_LIBVIRT_SECRET Error Failed to update Authentication Key USD{LibvirtSecretUUID}. (User: USD{UserName}). 10758 USER_REMOVED_LIBVIRT_SECRET Info Authentication Key USD{LibvirtSecretUUID} was removed. (User: USD{UserName}). 10759 USER_FAILED_TO_REMOVE_LIBVIRT_SECRET Error Failed to remove Authentication Key USD{LibvirtSecretUUID}. (User: USD{UserName}). 10760 FAILED_TO_REGISTER_LIBVIRT_SECRET Error Failed to register Authentication Keys for storage domain USD{StorageDomainName} on host USD{VdsName}. 10761 FAILED_TO_UNREGISTER_LIBVIRT_SECRET Error Failed to unregister Authentication Keys for storage domain USD{StorageDomainName} on host USD{VdsName}. 10762 FAILED_TO_REGISTER_LIBVIRT_SECRET_ON_VDS Error Failed to register Authentication Keys on host USD{VdsName}. 10763 NO_LIBRBD_PACKAGE_AVAILABLE_ON_VDS Error Librbd1 package is not available on host USD{VdsName}, which is mandatory for using Cinder storage domains. 10764 FAILED_TO_FREEZE_VM Warning Failed to freeze guest filesystems on VM USD{VmName}. Note that using the created snapshot might cause data inconsistency. 10765 FAILED_TO_THAW_VM Warning Failed to thaw guest filesystems on VM USD{VmName}. The filesystems might be unresponsive until the VM is restarted. 10766 FREEZE_VM_INITIATED Normal Freeze of guest filesystems on VM USD{VmName} was initiated. 10767 FREEZE_VM_SUCCESS Normal Guest filesystems on VM USD{VmName} have been frozen successfully. 10768 THAW_VM_SUCCESS Normal Guest filesystems on VM USD{VmName} have been thawed successfully. 
10769 USER_FAILED_TO_FREEZE_VM Warning Failed to freeze guest filesystems on ${VmName} (Host: ${VdsName}, User: ${UserName}). 10770 USER_FAILED_TO_THAW_VM Warning Failed to thaw guest filesystems on ${VmName} (Host: ${VdsName}, User: ${UserName}). 10771 VDS_CANNOT_CONNECT_TO_GLUSTERFS Error Host ${VdsName} cannot connect to Glusterfs. Verify that glusterfs-cli package is installed on the host. 10780 AFFINITY_RULES_ENFORCEMENT_MANAGER_START Normal Affinity Rules Enforcement Manager started. 10781 AFFINITY_RULES_ENFORCEMENT_MANAGER_INTERVAL_REACHED Normal 10800 VM_ADD_HOST_DEVICES Info Host devices ${NamesAdded} were attached to Vm ${VmName} by User ${UserName}. 10801 VM_REMOVE_HOST_DEVICES Info Host devices ${NamesRemoved} were detached from Vm ${VmName} by User ${UserName}. 10802 VDS_BROKER_COMMAND_FAILURE Error VDSM ${VdsName} command ${CommandName} failed: ${message} 10803 IRS_BROKER_COMMAND_FAILURE Error VDSM command ${CommandName} failed: ${message} 10804 VDS_UNKNOWN_HOST Error The address of host ${VdsName} could not be determined 10810 SYSTEM_CHANGE_STORAGE_POOL_STATUS_UP_REPORTING_HOSTS Normal Data Center ${StoragePoolName} status was changed to UP as some of its hosts are in status UP. 10811 SYSTEM_CHANGE_STORAGE_POOL_STATUS_NON_RESPONSIVE_NO_REPORTING_HOSTS Info Data Center ${StoragePoolName} status was changed to Non Responsive as none of its hosts are in status UP. 10812 STORAGE_POOL_LOWER_THAN_ENGINE_HIGHEST_CLUSTER_LEVEL Info Data Center ${StoragePoolName} compatibility version is ${dcVersion}, which is lower than latest engine version ${engineVersion}. Please upgrade your Data Center to latest version to successfully finish upgrade of your setup. 10900 HOST_SYNC_ALL_NETWORKS_FAILED Error Failed to sync all host ${VdsName} networks 10901 HOST_SYNC_ALL_NETWORKS_FINISHED Info Managed to sync all host ${VdsName} networks. 10902 PERSIST_HOST_SETUP_NETWORK_ON_HOST Info (${Sequence}/${Total}): Applying network changes on host ${VdsName}. (User: ${UserName}) 10903 PERSIST_SETUP_NETWORK_ON_HOST_FINISHED Info (${Sequence}/${Total}): Successfully applied changes on host ${VdsName}. (User: ${UserName}) 10904 PERSIST_SETUP_NETWORK_ON_HOST_FAILED Error (${Sequence}/${Total}): Failed to apply changes on host ${VdsName}. (User: ${UserName}) 10905 CLUSTER_SYNC_ALL_NETWORKS_FAILED Error Failed to sync all cluster ${ClusterName} networks 10906 CLUSTER_SYNC_ALL_NETWORKS_STARTED Info Started sync of all cluster ${ClusterName} networks. 10910 NETWORK_REMOVE_NIC_FILTER_PARAMETER Info Network interface filter parameter (id ${VmNicFilterParameterId}) was successfully removed by ${UserName}. 10911 NETWORK_REMOVE_NIC_FILTER_PARAMETER_FAILED Error Failed to remove network interface filter parameter (id ${VmNicFilterParameterId}) by ${UserName}. 10912 NETWORK_ADD_NIC_FILTER_PARAMETER Info Network interface filter parameter ${VmNicFilterParameterName} (id ${VmNicFilterParameterId}) was successfully added to Interface with id ${VmInterfaceId} on VM ${VmName} by ${UserName}. 10913 NETWORK_ADD_NIC_FILTER_PARAMETER_FAILED Error Failed to add network interface filter parameter ${VmNicFilterParameterName} (id ${VmNicFilterParameterId}) to Interface with id ${VmInterfaceId} on VM ${VmName} by ${UserName}. 
10914 NETWORK_UPDATE_NIC_FILTER_PARAMETER Info Network interface filter parameter USD{VmNicFilterParameterName} (id USD{VmNicFilterParameterId}) on Interface with id USD{VmInterfaceId} on VM USD{VmName} was successfully updated by USD{UserName}. 10915 NETWORK_UPDATE_NIC_FILTER_PARAMETER_FAILED Error Failed to update network interface filter parameter USD{VmNicFilterParameterName} (id USD{VmNicFilterParameterId}) on Interface with id USD{VmInterfaceId} on VM USD{VmName} by USD{UserName}. 10916 MAC_ADDRESS_HAD_TO_BE_REALLOCATED Warning Some MAC addresses had to be reallocated because they are duplicate. 10917 MAC_ADDRESS_VIOLATES_NO_DUPLICATES_SETTING Error Duplicate MAC addresses had to be introduced into mac pool violating no duplicates setting. 10918 MAC_ADDRESS_COULDNT_BE_REALLOCATED Error Some MAC addresses had to be reallocated, but operation failed because of insufficient amount of free MACs. 10920 NETWORK_IMPORT_EXTERNAL_NETWORK Info Successfully initiated import of external network USD{NetworkName} from provider USD{ProviderName}. 10921 NETWORK_IMPORT_EXTERNAL_NETWORK_FAILED Error Failed to initiate external network USD{NetworkName} from provider USD{ProviderName}. 10922 NETWORK_IMPORT_EXTERNAL_NETWORK_INTERNAL Info 10923 NETWORK_IMPORT_EXTERNAL_NETWORK_INTERNAL_FAILED Error 10924 NETWORK_AUTO_DEFINE_NO_DEFAULT_EXTERNAL_PROVIDER Warning Cannot create auto-defined network connected to USD{NetworkName}. Cluster USD{ClusterName} does not have default external network provider. 11000 USER_ADD_EXTERNAL_JOB Info New external Job USD{description} was added by user USD{UserName} 11001 USER_ADD_EXTERNAL_JOB_FAILED Error Failed to add new external Job USD{description} 11500 FAULTY_MULTIPATHS_ON_HOST Warning Faulty multipath paths on host USD{VdsName} on devices: [USD{MpathGuids}] 11501 NO_FAULTY_MULTIPATHS_ON_HOST Normal No faulty multipath paths on host USD{VdsName} 11502 MULTIPATH_DEVICES_WITHOUT_VALID_PATHS_ON_HOST Warning Multipath devices without valid paths on host USD{VdsName} : [USD{MpathGuids}] 12000 MIGRATION_REASON_AFFINITY_ENFORCEMENT Info Affinity rules enforcement 12001 MIGRATION_REASON_LOAD_BALANCING Info Load balancing 12002 MIGRATION_REASON_HOST_IN_MAINTENANCE Info Host preparing for maintenance 12003 VM_MIGRATION_NOT_ALL_VM_NICS_WERE_PLUGGED_BACK Error After migration of USD{VmName}, following vm nics failed to be plugged back: USD{NamesOfNotRepluggedNics}. 12004 VM_MIGRATION_PLUGGING_VM_NICS_FAILED Error After migration of USD{VmName} vm nics failed to be plugged back. 12005 CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION Error Cannot update compatibility version of Vm/Template: [USD{VmName}], Message: USD{Message} 13000 DEPRECATED_API Warning Client from address "USD{ClientAddress}" is using version USD{ApiVersion} of the API, which has been \ 13001 DEPRECATED_IPTABLES_FIREWALL Warning Cluster USD{ClusterName} uses IPTables firewall, which has been deprecated in \ | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/appe-event_codes |
Compatibility Guide | Compatibility Guide Red Hat Ceph Storage 5 Red Hat Ceph Storage and Its Compatibility With Other Products Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/compatibility_guide/index |
30.8. Statistics Files in /sys | 30.8. Statistics Files in /sys Statistics for a running VDO volume may be read from files in the /sys/kvdo/ volume_name /statistics directory, where volume_name is the name of the VDO volume. This provides an alternate interface to the data produced by the vdostats utility, suitable for access by shell scripts and management software. There are files in the statistics directory in addition to the ones listed in the table below. These additional statistics files are not guaranteed to be supported in future releases. Table 30.10. Statistics files File Description dataBlocksUsed The number of physical blocks currently in use by a VDO volume to store data. logicalBlocksUsed The number of logical blocks currently mapped. physicalBlocks The total number of physical blocks allocated for a VDO volume. logicalBlocks The maximum number of logical blocks that can be mapped by a VDO volume. mode Indicates whether a VDO volume is operating normally, is in recovery mode, or is in read-only mode. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/vdo-ig-integrating-sysfs-entries |
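Because these statistics are exposed as plain text files, management software can read them directly. The following is a minimal, illustrative Java sketch (not taken from the guide) that prints the files listed in Table 30.10 for one volume; the volume name vdo0 is an assumption, and any file that is not present is simply skipped.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class VdoStats {
    public static void main(String[] args) throws IOException {
        // Assumed volume name; replace with the actual VDO volume on your system.
        String volumeName = args.length > 0 ? args[0] : "vdo0";
        Path statsDir = Paths.get("/sys/kvdo", volumeName, "statistics");

        // Files documented in Table 30.10; other files in this directory
        // are not guaranteed to be supported in future releases.
        String[] files = {"dataBlocksUsed", "logicalBlocksUsed",
                          "physicalBlocks", "logicalBlocks", "mode"};

        for (String name : files) {
            Path file = statsDir.resolve(name);
            if (Files.isReadable(file)) {
                // Each file is expected to hold a single value per read.
                System.out.printf("%s = %s%n", name, Files.readString(file).trim());
            }
        }
    }
}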
Chapter 9. File Systems | Chapter 9. File Systems The CephFS kernel client is fully supported with Red Hat Ceph Storage 3 The Ceph File System (CephFS) kernel module enables Red Hat Enterprise Linux nodes to mount Ceph File Systems from Red Hat Ceph Storage clusters. The kernel client in Red Hat Enterprise Linux is a more efficient alternative to the Filesystem in Userspace (FUSE) client included with Red Hat Ceph Storage. Note that the kernel client currently lacks support for CephFS quotas. The CephFS kernel client was introduced in Red Hat Enterprise Linux 7.3 as a Technology Preview, and since the release of Red Hat Ceph Storage 3, CephFS is fully supported. For more information, see the Ceph File System Guide for Red Hat Ceph Storage 3: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_file_system_guide/ . (BZ#1205497) XFS now supports modifying labels on mounted file systems You can now modify the label attribute of mounted XFS file systems using the xfs_io utility: Previously, it was only possible to modify labels on unmounted file systems using the xfs_admin utility, which is still supported. (BZ# 1322930 ) pNFS SCSI layout is now fully supported for client and server Client and server support for parallel NFS (pNFS) SCSI layouts is now fully supported. It was first introduced in Red Hat Enterprise Linux 7.3 as a Technology Preview. Building on the work of block layouts, the pNFS layout is defined across SCSI devices and contains sequential series of fixed-size blocks as logical units that must be capable of supporting SCSI persistent reservations. The Logical Unit (LU) devices are identified by their SCSI device identification, and fencing is handled through the assignment of reservations. (BZ#1305092) ima-evm-utils is now fully supported on AMD64 and Intel 64 The ima-evm-utils package is now fully supported when used on the AMD64 and Intel 64 architecture. Note that on other architectures, ima-evm-utils remains in Technology Preview. The ima-evm-utils package provides utilities to label the file system and verify the integrity of your system at run time using the Integrity Measurement Architecture (IMA) and Extended Verification Module (EVM) features. These utilities enable you to monitor if files have been accidentally or maliciously altered. (BZ#1627278) | [
"xfs_io -c \"label -s new-label\" /mount-point"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_file_systems |
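The xfs_io invocation shown above can also be driven from management code. Below is a small, hypothetical sketch that simply shells out to xfs_io with ProcessBuilder; the mount point /mnt/data and the label new-label are placeholders, the target must be a mounted XFS file system, and the process needs sufficient privileges.

import java.io.IOException;

public class RelabelXfs {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholders: adjust the label and mount point for your system.
        String label = "new-label";
        String mountPoint = "/mnt/data";

        // Equivalent to: xfs_io -c "label -s new-label" /mnt/data
        Process p = new ProcessBuilder("xfs_io", "-c", "label -s " + label, mountPoint)
                .inheritIO()
                .start();

        int exitCode = p.waitFor();
        if (exitCode != 0) {
            System.err.println("xfs_io exited with status " + exitCode);
        }
    }
}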
Installing Red Hat Virtualization as a standalone Manager with local databases | Installing Red Hat Virtualization as a standalone Manager with local databases Red Hat Virtualization 4.4 Installing the Red Hat Virtualization Manager and its databases on the same server Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document describes how to install a standalone Manager environment - where the Red Hat Virtualization Manager is installed on either a physical server or a virtual machine hosted in another environment - with the Manager database and the Data Warehouse service and database installed on the same machine as the Manager. If this is not the configuration you want to use, see the other Installation Options in the Product Guide . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_and_allocating_storage_resources/making-open-source-more-inclusive |
Chapter 2. ConsoleCLIDownload [console.openshift.io/v1] | Chapter 2. ConsoleCLIDownload [console.openshift.io/v1] Description ConsoleCLIDownload is an extension for configuring openshift web console command line interface (CLI) downloads. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleCLIDownloadSpec is the desired cli download configuration. 2.1.1. .spec Description ConsoleCLIDownloadSpec is the desired cli download configuration. Type object Required description displayName links Property Type Description description string description is the description of the CLI download (can include markdown). displayName string displayName is the display name of the CLI download. links array links is a list of objects that provide CLI download link details. links[] object 2.1.2. .spec.links Description links is a list of objects that provide CLI download link details. Type array 2.1.3. .spec.links[] Description Type object Required href Property Type Description href string href is the absolute secure URL for the link (must use https) text string text is the display text for the link 2.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleclidownloads DELETE : delete collection of ConsoleCLIDownload GET : list objects of kind ConsoleCLIDownload POST : create a ConsoleCLIDownload /apis/console.openshift.io/v1/consoleclidownloads/{name} DELETE : delete a ConsoleCLIDownload GET : read the specified ConsoleCLIDownload PATCH : partially update the specified ConsoleCLIDownload PUT : replace the specified ConsoleCLIDownload /apis/console.openshift.io/v1/consoleclidownloads/{name}/status GET : read status of the specified ConsoleCLIDownload PATCH : partially update status of the specified ConsoleCLIDownload PUT : replace status of the specified ConsoleCLIDownload 2.2.1. /apis/console.openshift.io/v1/consoleclidownloads HTTP method DELETE Description delete collection of ConsoleCLIDownload Table 2.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleCLIDownload Table 2.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownloadList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleCLIDownload Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.4. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 202 - Accepted ConsoleCLIDownload schema 401 - Unauthorized Empty 2.2.2. /apis/console.openshift.io/v1/consoleclidownloads/{name} Table 2.6. Global path parameters Parameter Type Description name string name of the ConsoleCLIDownload HTTP method DELETE Description delete a ConsoleCLIDownload Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleCLIDownload Table 2.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleCLIDownload Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleCLIDownload Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 401 - Unauthorized Empty 2.2.3. /apis/console.openshift.io/v1/consoleclidownloads/{name}/status Table 2.15. Global path parameters Parameter Type Description name string name of the ConsoleCLIDownload HTTP method GET Description read status of the specified ConsoleCLIDownload Table 2.16. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleCLIDownload Table 2.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.18. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleCLIDownload Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/console_apis/consoleclidownload-console-openshift-io-v1 |
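As an illustration of the list endpoint documented above, the following sketch issues a GET against /apis/console.openshift.io/v1/consoleclidownloads using Java's built-in HTTP client. The API server URL and the bearer token are placeholders (assumptions, not values from this reference), and certificate handling is left to the default trust store, so treat it as a starting point rather than a complete client.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListConsoleCliDownloads {
    public static void main(String[] args) throws Exception {
        // Placeholders: point these at your cluster and a valid token.
        String apiServer = "https://api.example.openshift.local:6443";
        String token = System.getenv("OPENSHIFT_TOKEN");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/apis/console.openshift.io/v1/consoleclidownloads"))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Per the endpoint table: 200 returns a ConsoleCLIDownloadList,
        // 401 indicates a missing or invalid token.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}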
Chapter 2. Differences from upstream OpenJDK 11 | Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.17/rn-openjdk-diff-from-upstream |
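One way to observe the cryptographic policy support described above is to inspect, from Java itself, which security providers are active and which algorithms are currently disabled. This is a minimal sketch using only standard java.security APIs; the exact output depends on the RHEL crypto policy (and FIPS mode) in effect on the host, and nothing here is specific to a particular Red Hat build of OpenJDK release.

import java.security.Provider;
import java.security.Security;

public class ShowSecurityConfig {
    public static void main(String[] args) {
        // Providers are configured via java.security, which is adjusted
        // automatically when the host is in FIPS mode.
        for (Provider provider : Security.getProviders()) {
            System.out.println(provider.getName() + " " + provider.getVersionStr());
        }

        // Algorithm restrictions that reflect the system-wide crypto policy.
        System.out.println("jdk.tls.disabledAlgorithms = "
                + Security.getProperty("jdk.tls.disabledAlgorithms"));
    }
}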
Chapter 34. Elasticsearch | Chapter 34. Elasticsearch Since Camel 3.18.3 Only producer is supported The ElasticSearch component allows you to interface with an ElasticSearch 8.x API using the Java API Client library. 34.1. Dependencies When using elasticsearch with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-elasticsearch-starter</artifactId> </dependency> 34.2. URI format 34.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 34.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 34.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 34.4. Component Options The Elasticsearch component supports 14 options, which are listed below. Name Description Default Type connectionTimeout (producer) The time in ms to wait before connection will timeout. 30000 int hostAddresses (producer) Comma separated list with ip:port formatted remote transport addresses to use. The ip and port options must be left blank for hostAddresses to be considered instead. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean maxRetryTimeout (producer) The time in ms before retry. 30000 int socketTimeout (producer) The timeout in ms to wait before the socket will timeout. 30000 int autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean client (advanced) Autowired To use an existing configured Elasticsearch client, instead of creating a client per endpoint. This allow to customize the client with specific settings. RestClient enableSniffer (advanced) Enable automatically discover nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot). false boolean sniffAfterFailureDelay (advanced) The delay of a sniff execution scheduled after a failure (in milliseconds). 60000 int snifferInterval (advanced) The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions. 300000 int certificatePath (security) The path of the self-signed certificate to use to access to Elasticsearch. String enableSSL (security) Enable SSL. false boolean password (security) Password for authenticate. String user (security) Basic authenticate user. String 34.5. Endpoint Options The Elasticsearch endpoint is configured using URI syntax: with the following path and query parameters: 34.5.1. Path Parameters (1 parameters) Name Description Default Type clusterName (producer) Required Name of the cluster. String 34.5.2. Query Parameters (19 parameters) Name Description Default Type connectionTimeout (producer) The time in ms to wait before connection will timeout. 30000 int disconnect (producer) Disconnect after it finish calling the producer. false boolean from (producer) Starting index of the response. Integer hostAddresses (producer) Comma separated list with ip:port formatted remote transport addresses to use. String indexName (producer) The name of the index to act against. String maxRetryTimeout (producer) The time in ms before retry. 30000 int operation (producer) What operation to perform. Enum values: Index Update Bulk GetById MultiGet MultiSearch Delete DeleteIndex Search Exists Ping ElasticsearchOperation scrollKeepAliveMs (producer) Time in ms during which elasticsearch will keep search context alive. 60000 int size (producer) Size of the response. Integer socketTimeout (producer) The timeout in ms to wait before the socket will timeout. 30000 int useScroll (producer) Enable scroll usage. false boolean waitForActiveShards (producer) Index creation waits for the write consistency number of shards to be available. 1 int lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean documentClass (advanced) The class to use when deserializing the documents. 
ObjectNode Class enableSniffer (advanced) Enable automatically discover nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot). false boolean sniffAfterFailureDelay (advanced) The delay of a sniff execution scheduled after a failure (in milliseconds). 60000 int snifferInterval (advanced) The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions. 300000 int certificatePath (security) The path of the self-signed certificate to use to access to Elasticsearch. String enableSSL (security) Enable SSL. false boolean 34.6. Message Headers The Elasticsearch component supports 9 message header(s), which is/are listed below: Name Description Default Type operation (producer) Constant: PARAM_OPERATION The operation to perform. Enum values: Index Update Bulk GetById MultiGet MultiSearch Delete DeleteIndex Search Exists Ping ElasticsearchOperation indexId (producer) Constant: PARAM_INDEX_ID The id of the indexed document. String indexName (producer) Constant: PARAM_INDEX_NAME The name of the index to act against. String documentClass (producer) Constant: PARAM_DOCUMENT_CLASS The full qualified name of the class of the document to unmarshall. ObjectNode Class waitForActiveShards (producer) Constant: PARAM_WAIT_FOR_ACTIVE_SHARDS The index creation waits for the write consistency number of shards to be available. Integer scrollKeepAliveMs (producer) Constant: PARAM_SCROLL_KEEP_ALIVE_MS The starting index of the response. Integer useScroll (producer) Constant: PARAM_SCROLL Set to true to enable scroll usage. Boolean size (producer) Constant: PARAM_SIZE The size of the response. Integer from (producer) Constant: PARAM_FROM The starting index of the response. Integer 34.7. Message Operations The following ElasticSearch operations are currently supported. Simply set an endpoint URI option or exchange header with a key of "operation" and a value set to one of the following. Some operations also require other parameters or the message body to be set. operation message body description Index Map , String , byte[] , Reader , InputStream or IndexRequest.Builder content to index Adds content to an index and returns the content's indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId". GetById String or GetRequest.Builder index id of content to retrieve Retrieves the document corresponding to the given index id and returns a GetResponse object in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the type of document by setting the message header with the key "documentClass". Delete String or DeleteRequest.Builder index id of content to delete Deletes the specified indexName and returns a Result object in the body. You can set the name of the target index by setting the message header with the key "indexName". DeleteIndex String or DeleteIndexRequest.Builder index name of the index to delete Deletes the specified indexName and returns a status code in the body. You can set the name of the target index by setting the message header with the key "indexName". 
Bulk Iterable or BulkRequest.Builder of any type that is already accepted (DeleteOperation.Builder for delete operation, UpdateOperation.Builder for update operation, CreateOperation.Builder for create operation, byte[], InputStream, String, Reader, Map or any document type for index operation) Adds/Updates/Deletes content from/to an index and returns a List<BulkResponseItem> object in the body You can set the name of the target index by setting the message header with the key "indexName". Search Map , String or SearchRequest.Builder Search the content with the map of query string. You can set the name of the target index by setting the message header with the key "indexName". You can set the number of hits to return by setting the message header with the key "size". You can set the starting document offset by setting the message header with the key "from". MultiSearch MsearchRequest.Builder Multiple search in one MultiGet Iterable<String> or MgetRequest.Builder the id of the document to retrieve Multiple get in one You can set the name of the target index by setting the message header with the key "indexName". Exists None Checks whether the index exists or not and returns a Boolean flag in the body. You must set the name of the target index by setting the message header with the key "indexName". Update byte[] , InputStream , String , Reader , Map or any document type content to update Updates content to an index and returns the content's indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId". Ping None Pings the Elasticsearch cluster and returns true if the ping succeeded, false otherwise 34.8. Configure the component and enable basic authentication To use the Elasticsearch component it has to be configured with a minimum configuration. ElasticsearchComponent elasticsearchComponent = new ElasticsearchComponent(); elasticsearchComponent.setHostAddresses("myelkhost:9200"); camelContext.addComponent("elasticsearch", elasticsearchComponent); For basic authentication with elasticsearch or using reverse http proxy in front of the elasticsearch cluster, simply setup basic authentication and SSL on the component like the example below ElasticsearchComponent elasticsearchComponent = new ElasticsearchComponent(); elasticsearchComponent.setHostAddresses("myelkhost:9200"); elasticsearchComponent.setUser("elkuser"); elasticsearchComponent.setPassword("secure!!"); elasticsearchComponent.setEnableSSL(true); elasticsearchComponent.setCertificatePath(certPath); camelContext.addComponent("elasticsearch", elasticsearchComponent); 34.9. Index Example Below is a simple INDEX example from("direct:index") .to("elasticsearch://elasticsearch?operation=Index&indexName=twitter"); <route> <from uri="direct:index"/> <to uri="elasticsearch://elasticsearch?operation=Index&indexName=twitter"/> </route> For this operation you need to specify a indexId header. A client would simply need to pass a body message containing a Map to the route. The result body contains the indexId created. Map<String, String> map = new HashMap<String, String>(); map.put("content", "test"); String indexId = template.requestBody("direct:index", map, String.class); 34.10. Search Example Searching on specific field(s) and value use the Operation \u0301Search \u0301. 
Pass in the query JSON String or the Map from("direct:search") .to("elasticsearch://elasticsearch?operation=Search&indexName=twitter"); <route> <from uri="direct:search"/> <to uri="elasticsearch://elasticsearch?operation=Search&indexName=twitter"/> </route> String query = "{\"query\":{\"match\":{\"doc.content\":\"new release of ApacheCamel\"}}}"; HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class); Search on specific field(s) using Map. Map<String, Object> actualQuery = new HashMap<>(); actualQuery.put("doc.content", "new release of ApacheCamel"); Map<String, Object> match = new HashMap<>(); match.put("match", actualQuery); Map<String, Object> query = new HashMap<>(); query.put("query", match); HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class); Search using Elasticsearch scroll api in order to fetch all results. from("direct:search") .to("elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000"); <route> <from uri="direct:search"/> <to uri="elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000"/> </route> String query = "{\"query\":{\"match\":{\"doc.content\":\"new release of ApacheCamel\"}}}"; try (ElasticsearchScrollRequestIterator response = template.requestBody("direct:search", query, ElasticsearchScrollRequestIterator.class)) { // do something smart with results } can also be used. from("direct:search") .to("elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000") .split() .body() .streaming() .to("mock:output") .end(); 34.11. MultiSearch Example MultiSearching on specific field(s) and value use the Operation \u0301MultiSearch \u0301. Pass in the MultiSearchRequest instance from("direct:multiSearch") .to("elasticsearch://elasticsearch?operation=MultiSearch"); <route> <from uri="direct:multiSearch"/> <to uri="elasticsearch://elasticsearch?operation=MultiSearch"/> </route> MultiSearch on specific field(s) MsearchRequest.Builder builder = new MsearchRequest.Builder().index("twitter").searches( new RequestItem.Builder().header(new MultisearchHeader.Builder().build()) .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build(), new RequestItem.Builder().header(new MultisearchHeader.Builder().build()) .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build()); List<MultiSearchResponseItem<?>> response = template.requestBody("direct:multiSearch", builder, List.class); 34.12. Document type For all the search operations, it is possible to indicate the type of document to retrieve in order to get the result already unmarshalled with the expected type. The document type can be set using the header "documentClass" or via the uri parameter of the same name. 34.13. Using Camel Elasticsearch with Spring Boot When you use camel-elasticsearch-starter with Spring Boot v2, then you must declare the following dependency in your own pom.xml . <dependency> <groupId>jakarta.json</groupId> <artifactId>jakarta.json-api</artifactId> <version>2.0.2</version> </dependency> This is needed because Spring Boot v2 provides jakarta.json-api:1.1.6, and Elasticsearch requires to use json-api v2. 34.13.1. 
Use RestClient provided by Spring Boot By default Spring Boot will auto configure an Elasticsearch RestClient that will be used by camel, it is possible to customize the client with the following basic properties: spring.elasticsearch.uris=myelkhost:9200 spring.elasticsearch.username=elkuser spring.elasticsearch.password=secure!! More information can be found in application-properties.data.spring.elasticsearch.connection-timeout . 34.13.2. Disable Sniffer when using Spring Boot When Spring Boot is on the classpath the Sniffer client for Elasticsearch is enabled by default. This option can be disabled in the Spring Boot Configuration: spring: autoconfigure: exclude: org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientAutoConfiguration 34.14. Spring Boot Auto-Configuration The component supports 15 options, which are listed below. Name Description Default Type camel.component.elasticsearch.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.elasticsearch.certificate-path The path of the self-signed certificate to use to access to Elasticsearch. String camel.component.elasticsearch.client To use an existing configured Elasticsearch client, instead of creating a client per endpoint. This allow to customize the client with specific settings. The option is a org.elasticsearch.client.RestClient type. RestClient camel.component.elasticsearch.connection-timeout The time in ms to wait before connection will timeout. 30000 Integer camel.component.elasticsearch.enable-s-s-l Enable SSL. false Boolean camel.component.elasticsearch.enable-sniffer Enable automatically discover nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot). false Boolean camel.component.elasticsearch.enabled Whether to enable auto configuration of the elasticsearch component. This is enabled by default. Boolean camel.component.elasticsearch.host-addresses Comma separated list with ip:port formatted remote transport addresses to use. The ip and port options must be left blank for hostAddresses to be considered instead. String camel.component.elasticsearch.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.elasticsearch.max-retry-timeout The time in ms before retry. 30000 Integer camel.component.elasticsearch.password Password for authenticate. String camel.component.elasticsearch.sniff-after-failure-delay The delay of a sniff execution scheduled after a failure (in milliseconds). 
60000 Integer camel.component.elasticsearch.sniffer-interval The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions. 300000 Integer camel.component.elasticsearch.socket-timeout The timeout in ms to wait before the socket will timeout. 30000 Integer camel.component.elasticsearch.user Basic authenticate user. String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-elasticsearch-starter</artifactId> </dependency>",
"elasticsearch://clusterName[?options]",
"elasticsearch:clusterName",
"ElasticsearchComponent elasticsearchComponent = new ElasticsearchComponent(); elasticsearchComponent.setHostAddresses(\"myelkhost:9200\"); camelContext.addComponent(\"elasticsearch\", elasticsearchComponent);",
"ElasticsearchComponent elasticsearchComponent = new ElasticsearchComponent(); elasticsearchComponent.setHostAddresses(\"myelkhost:9200\"); elasticsearchComponent.setUser(\"elkuser\"); elasticsearchComponent.setPassword(\"secure!!\"); elasticsearchComponent.setEnableSSL(true); elasticsearchComponent.setCertificatePath(certPath); camelContext.addComponent(\"elasticsearch\", elasticsearchComponent);",
"from(\"direct:index\") .to(\"elasticsearch://elasticsearch?operation=Index&indexName=twitter\");",
"<route> <from uri=\"direct:index\"/> <to uri=\"elasticsearch://elasticsearch?operation=Index&indexName=twitter\"/> </route>",
"Map<String, String> map = new HashMap<String, String>(); map.put(\"content\", \"test\"); String indexId = template.requestBody(\"direct:index\", map, String.class);",
"from(\"direct:search\") .to(\"elasticsearch://elasticsearch?operation=Search&indexName=twitter\");",
"<route> <from uri=\"direct:search\"/> <to uri=\"elasticsearch://elasticsearch?operation=Search&indexName=twitter\"/> </route>",
"String query = \"{\\\"query\\\":{\\\"match\\\":{\\\"doc.content\\\":\\\"new release of ApacheCamel\\\"}}}\"; HitsMetadata<?> response = template.requestBody(\"direct:search\", query, HitsMetadata.class);",
"Map<String, Object> actualQuery = new HashMap<>(); actualQuery.put(\"doc.content\", \"new release of ApacheCamel\"); Map<String, Object> match = new HashMap<>(); match.put(\"match\", actualQuery); Map<String, Object> query = new HashMap<>(); query.put(\"query\", match); HitsMetadata<?> response = template.requestBody(\"direct:search\", query, HitsMetadata.class);",
"from(\"direct:search\") .to(\"elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000\");",
"<route> <from uri=\"direct:search\"/> <to uri=\"elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000\"/> </route>",
"String query = \"{\\\"query\\\":{\\\"match\\\":{\\\"doc.content\\\":\\\"new release of ApacheCamel\\\"}}}\"; try (ElasticsearchScrollRequestIterator response = template.requestBody(\"direct:search\", query, ElasticsearchScrollRequestIterator.class)) { // do something smart with results }",
"from(\"direct:search\") .to(\"elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000\") .split() .body() .streaming() .to(\"mock:output\") .end();",
"from(\"direct:multiSearch\") .to(\"elasticsearch://elasticsearch?operation=MultiSearch\");",
"<route> <from uri=\"direct:multiSearch\"/> <to uri=\"elasticsearch://elasticsearch?operation=MultiSearch\"/> </route>",
"MsearchRequest.Builder builder = new MsearchRequest.Builder().index(\"twitter\").searches( new RequestItem.Builder().header(new MultisearchHeader.Builder().build()) .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build(), new RequestItem.Builder().header(new MultisearchHeader.Builder().build()) .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build()); List<MultiSearchResponseItem<?>> response = template.requestBody(\"direct:multiSearch\", builder, List.class);",
"<dependency> <groupId>jakarta.json</groupId> <artifactId>jakarta.json-api</artifactId> <version>2.0.2</version> </dependency>",
"spring.elasticsearch.uris=myelkhost:9200 spring.elasticsearch.username=elkuser spring.elasticsearch.password=secure!!",
"spring: autoconfigure: exclude: org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientAutoConfiguration"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-elasticsearch-component-starter |
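To round out the Index and Search examples in this chapter, here is a short route-builder sketch (not taken from the guide) showing the GetById and Delete operations from the Message Operations table. The cluster name elasticsearch and the index name twitter follow the chapter's own examples; per the table, the message body carries the document id and the reply body is a GetResponse or Result respectively.

import org.apache.camel.builder.RouteBuilder;

public class ElasticsearchGetDeleteRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // GetById: the body carries the document id; the reply body is a GetResponse.
        from("direct:getById")
            .to("elasticsearch://elasticsearch?operation=GetById&indexName=twitter");

        // Delete: the body carries the id of the document to delete; the reply body is a Result.
        from("direct:delete")
            .to("elasticsearch://elasticsearch?operation=Delete&indexName=twitter");
    }
}

A caller could then, for instance, reuse the indexId returned by the Index example earlier in the chapter: template.requestBody("direct:getById", indexId, GetResponse.class).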
Chapter 211. Language Component | Chapter 211. Language Component Available as of Camel version 2.5 The language component allows you to send Exchange to an endpoint which executes a script by any of the supported Languages in Camel. By having a component to execute language scripts, it allows more dynamic routing capabilities. For example, by using the Routing Slip or Dynamic Router EIPs you can send messages to language endpoints where the script is dynamically defined as well. This component is provided out of the box in camel-core and hence no additional JARs are needed. You only have to include additional Camel components if the language of choice mandates it, such as using Groovy or JavaScript languages. 211.1. URI format And from Camel 2.11 onwards you can refer to an external resource for the script using the same notation as supported by the other Languages in Camel 211.2. URI Options The Language component has no options. The Language endpoint is configured using URI syntax: with the following path and query parameters: 211.2.1. Path Parameters (2 parameters): Name Description Default Type languageName Required Sets the name of the language to use String resourceUri Path to the resource, or a reference to lookup a bean in the Registry to use as the resource String 211.2.2. Query Parameters (6 parameters): Name Description Default Type binary (producer) Whether the script is binary content or text content. By default the script is read as text content (e.g. java.lang.String) false boolean cacheScript (producer) Whether to cache the compiled script and reuse it. Notice that reusing the script can cause side effects from processing one Camel org.apache.camel.Exchange to the next org.apache.camel.Exchange. false boolean contentCache (producer) Sets whether to use resource content cache or not. false boolean script (producer) Sets the script to execute String transform (producer) Whether or not the result of the script should be used as message body. This option is default true. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 211.3. Message Headers The following message headers can be used to affect the behavior of the component Header Description CamelLanguageScript The script to execute provided in the header. Takes precedence over script configured on the endpoint. 211.4. Examples For example you can use the Simple language as a Message Translator for a message: In case you want to convert the message body type you can do this as well: You can also use the Groovy language, such as this example where the input message will be multiplied by 2: You can also provide the script as a header as shown below. Here we use XPath language to extract the text from the <foo> tag. Object out = producer.requestBodyAndHeader("language:xpath", "<foo>Hello World</foo>", Exchange.LANGUAGE_SCRIPT, "/foo/text()"); assertEquals("Hello World", out); 211.5. Loading scripts from resources Available as of Camel 2.9 You can specify a resource uri for a script to load in either the endpoint uri, or in the Exchange.LANGUAGE_SCRIPT header. The uri must start with one of the following schemes: file:, classpath:, or http: For example to load a script from the classpath: By default the script is loaded once and cached. However you can disable the contentCache option and have the script loaded on each evaluation.
For example, if the file myscript.txt is changed on disk, then the updated script is used: From Camel 2.11 onwards you can refer to the resource in the same way as the other Languages in Camel by prefixing with "resource:" as shown below: | [
"language://languageName[:script][?options]",
"language://languageName:resource:scheme:location][?options]",
"language:languageName:resourceUri",
"Object out = producer.requestBodyAndHeader(\"language:xpath\", \"<foo>Hello World</foo>\", Exchange.LANGUAGE_SCRIPT, \"/foo/text()\"); assertEquals(\"Hello World\", out);"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/language-component |
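Tying the URI format and the examples above together, here is a small, illustrative route-builder sketch (not taken from the guide). It uses the Simple language, which is available in camel-core, once with an inline script and once with a script loaded from the classpath; the resource name org/example/greeting.txt is a made-up placeholder.

import org.apache.camel.builder.RouteBuilder;

public class LanguageRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Inline Simple script: the script result becomes the message body
        // because the transform option defaults to true.
        from("direct:inline")
            .to("language:simple:Hello ${body}");

        // Script loaded from the classpath (see section 211.5); contentCache
        // keeps the loaded script instead of re-reading it on each exchange.
        from("direct:fromResource")
            .to("language:simple:resource:classpath:org/example/greeting.txt?contentCache=true");
    }
}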
Chapter 303. Shiro Security Component | Chapter 303. Shiro Security Component Available as of Camel 2.5 The shiro-security component in Camel is a security focused component, based on the Apache Shiro security project. Apache Shiro is a powerful and flexible open-source security framework that cleanly handles authentication, authorization, enterprise session management and cryptography. The objective of the Apache Shiro project is to provide the most robust and comprehensive application security framework available while also being very easy to understand and extremely simple to use. This camel shiro-security component allows authentication and authorization support to be applied to different segments of a camel route. Shiro security is applied on a route using a Camel Policy. A Policy in Camel utilizes a strategy pattern for applying interceptors on Camel Processors. It offering the ability to apply cross-cutting concerns (for example. security, transactions etc) on sections/segments of a camel route. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-shiro</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 303.1. Shiro Security Basics To employ Shiro security on a camel route, a ShiroSecurityPolicy object must be instantiated with security configuration details (including users, passwords, roles etc). This object must then be applied to a camel route. This ShiroSecurityPolicy Object may also be registered in the Camel registry (JNDI or ApplicationContextRegistry) and then utilized on other routes in the Camel Context. Configuration details are provided to the ShiroSecurityPolicy using an Ini file (properties file) or an Ini object. The Ini file is a standard Shiro configuration file containing user/role details as shown below [users] # user 'ringo' with password 'starr' and the 'sec-level1' role ringo = starr, sec-level1 george = harrison, sec-level2 john = lennon, sec-level3 paul = mccartney, sec-level3 [roles] # 'sec-level3' role has all permissions, indicated by the # wildcard '*' sec-level3 = * # The 'sec-level2' role can do anything with access of permission # readonly (*) to help sec-level2 = zone1:* # The 'sec-level1' role can do anything with access of permission # readonly sec-level1 = zone1:readonly:* 303.2. Instantiating a ShiroSecurityPolicy Object A ShiroSecurityPolicy object is instantiated as follows private final String iniResourcePath = "classpath:shiro.ini"; private final byte[] passPhrase = { (byte) 0x08, (byte) 0x09, (byte) 0x0A, (byte) 0x0B, (byte) 0x0C, (byte) 0x0D, (byte) 0x0E, (byte) 0x0F, (byte) 0x10, (byte) 0x11, (byte) 0x12, (byte) 0x13, (byte) 0x14, (byte) 0x15, (byte) 0x16, (byte) 0x17}; List<permission> permissionsList = new ArrayList<permission>(); Permission permission = new WildcardPermission("zone1:readwrite:*"); permissionsList.add(permission); final ShiroSecurityPolicy securityPolicy = new ShiroSecurityPolicy(iniResourcePath, passPhrase, true, permissionsList); 303.3. ShiroSecurityPolicy Options Name Default Value Type Description iniResourcePath or ini none Resource String or Ini Object A mandatory Resource String for the iniResourcePath or an instance of an Ini object must be passed to the security policy. Resources can be acquired from the file system, classpath, or URLs when prefixed with "file:, classpath:, or url:" respectively. 
For example, "classpath:shiro.ini" passPhrase An AES 128 based key byte[] A passPhrase to decrypt ShiroSecurityToken(s) sent along with Message Exchanges alwaysReauthenticate true boolean Setting to ensure re-authentication on every individual request. If set to false, the user is authenticated and locked such that only requests from the same user going forward are authenticated. permissionsList none List<Permission> A List of permissions required in order for an authenticated user to be authorized to perform further action, i.e. continue further on the route. If no Permissions list or Roles List (see below) is provided to the ShiroSecurityPolicy object, then authorization is deemed as not required. Note that the default is that authorization is granted if any of the Permission Objects in the list are applicable. rolesList none List<String> Camel 2.13: A List of roles required in order for an authenticated user to be authorized to perform further action, i.e. continue further on the route. If no roles list or permissions list (see above) is provided to the ShiroSecurityPolicy object, then authorization is deemed as not required. Note that the default is that authorization is granted if any of the roles in the list are applicable. cipherService AES org.apache.shiro.crypto.CipherService Shiro ships with AES & Blowfish based CipherServices. You may use one of these or pass in your own Cipher implementation base64 false boolean Camel 2.12: To use base64 encoding for the security token header, which allows transferring the header over JMS etc. This option must also be set on ShiroSecurityTokenInjector as well. allPermissionsRequired false boolean Camel 2.13: The default is that authorization is granted if any of the Permission Objects in the permissionsList parameter are applicable. Set this to true to require all of the Permissions to be met. allRolesRequired false boolean Camel 2.13: The default is that authorization is granted if any of the roles in the rolesList parameter are applicable. Set this to true to require all of the roles to be met. 303.4. Applying Shiro Authentication on a Camel Route The ShiroSecurityPolicy tests and permits incoming message exchanges containing an encrypted SecurityToken in the Message Header to proceed further following proper authentication. The SecurityToken object contains Username/Password details that are used to determine whether the user is a valid user. protected RouteBuilder createRouteBuilder() throws Exception { final ShiroSecurityPolicy securityPolicy = new ShiroSecurityPolicy("classpath:shiro.ini", passPhrase); return new RouteBuilder() { public void configure() { onException(UnknownAccountException.class). to("mock:authenticationException"); onException(IncorrectCredentialsException.class). to("mock:authenticationException"); onException(LockedAccountException.class). to("mock:authenticationException"); onException(AuthenticationException.class). to("mock:authenticationException"); from("direct:secureEndpoint"). to("log:incoming payload"). policy(securityPolicy). to("mock:success"); } }; } 303.5. Applying Shiro Authorization on a Camel Route Authorization can be applied on a camel route by associating a Permissions List with the ShiroSecurityPolicy. The Permissions List specifies the permissions necessary for the user to proceed with the execution of the route segment. If the user does not have the proper permission set, the request is not authorized to continue any further.
protected RouteBuilder createRouteBuilder() throws Exception { final ShiroSecurityPolicy securityPolicy = new ShiroSecurityPolicy("./src/test/resources/securityconfig.ini", passPhrase); return new RouteBuilder() { public void configure() { onException(UnknownAccountException.class). to("mock:authenticationException"); onException(IncorrectCredentialsException.class). to("mock:authenticationException"); onException(LockedAccountException.class). to("mock:authenticationException"); onException(AuthenticationException.class). to("mock:authenticationException"); from("direct:secureEndpoint"). to("log:incoming payload"). policy(securityPolicy). to("mock:success"); } }; } 303.6. Creating a ShiroSecurityToken and injecting it into a Message Exchange A ShiroSecurityToken object may be created and injected into a Message Exchange using a Shiro Processor called ShiroSecurityTokenInjector. An example of injecting a ShiroSecurityToken using a ShiroSecurityTokenInjector in the client is shown below ShiroSecurityToken shiroSecurityToken = new ShiroSecurityToken("ringo", "starr"); ShiroSecurityTokenInjector shiroSecurityTokenInjector = new ShiroSecurityTokenInjector(shiroSecurityToken, passPhrase); from("direct:client"). process(shiroSecurityTokenInjector). to("direct:secureEndpoint"); 303.7. Sending Messages to routes secured by a ShiroSecurityPolicy Messages and Message Exchanges sent along the camel route where the security policy is applied need to be accompanied by a SecurityToken in the Exchange Header. The SecurityToken is an encrypted object that holds a Username and Password. The SecurityToken is encrypted using AES 128-bit security by default and can be changed to any cipher of your choice. Given below is an example of how a request may be sent using a ProducerTemplate in Camel along with a SecurityToken @Test public void testSuccessfulShiroAuthenticationWithNoAuthorization() throws Exception { //Incorrect password ShiroSecurityToken shiroSecurityToken = new ShiroSecurityToken("ringo", "stirr"); // TestShiroSecurityTokenInjector extends ShiroSecurityTokenInjector TestShiroSecurityTokenInjector shiroSecurityTokenInjector = new TestShiroSecurityTokenInjector(shiroSecurityToken, passPhrase); successEndpoint.expectedMessageCount(1); failureEndpoint.expectedMessageCount(0); template.send("direct:secureEndpoint", shiroSecurityTokenInjector); successEndpoint.assertIsSatisfied(); failureEndpoint.assertIsSatisfied(); } 303.8. Sending Messages to routes secured by a ShiroSecurityPolicy (much easier from Camel 2.12 onwards) From Camel 2.12 onwards it is even easier, as you can provide the subject in two different ways. 303.8.1. Using ShiroSecurityToken You can send a message to a Camel route with a header of key ShiroSecurityConstants.SHIRO_SECURITY_TOKEN of the type org.apache.camel.component.shiro.security.ShiroSecurityToken that contains the username and password.
For example: ShiroSecurityToken shiroSecurityToken = new ShiroSecurityToken("ringo", "starr"); template.sendBodyAndHeader("direct:secureEndpoint", "Beatle Mania", ShiroSecurityConstants.SHIRO_SECURITY_TOKEN, shiroSecurityToken); You can also provide the username and password in two different headers as shown below: Map<String, Object> headers = new HashMap<String, Object>(); headers.put(ShiroSecurityConstants.SHIRO_SECURITY_USERNAME, "ringo"); headers.put(ShiroSecurityConstants.SHIRO_SECURITY_PASSWORD, "starr"); template.sendBodyAndHeaders("direct:secureEndpoint", "Beatle Mania", headers); When you use the username and password headers, then the ShiroSecurityPolicy in the Camel route will automatically transform those into a single header with key ShiroSecurityConstants.SHIRO_SECURITY_TOKEN with the token. The token is then either a ShiroSecurityToken instance or a base64 representation as a String (the latter when you have set base64=true). | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-shiro</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"user 'ringo' with password 'starr' and the 'sec-level1' role ringo = starr, sec-level1 george = harrison, sec-level2 john = lennon, sec-level3 paul = mccartney, sec-level3 'sec-level3' role has all permissions, indicated by the wildcard '*' sec-level3 = * The 'sec-level2' role can do anything with access of permission readonly (*) to help sec-level2 = zone1:* The 'sec-level1' role can do anything with access of permission readonly sec-level1 = zone1:readonly:*",
"private final String iniResourcePath = \"classpath:shiro.ini\"; private final byte[] passPhrase = { (byte) 0x08, (byte) 0x09, (byte) 0x0A, (byte) 0x0B, (byte) 0x0C, (byte) 0x0D, (byte) 0x0E, (byte) 0x0F, (byte) 0x10, (byte) 0x11, (byte) 0x12, (byte) 0x13, (byte) 0x14, (byte) 0x15, (byte) 0x16, (byte) 0x17}; List<permission> permissionsList = new ArrayList<permission>(); Permission permission = new WildcardPermission(\"zone1:readwrite:*\"); permissionsList.add(permission); final ShiroSecurityPolicy securityPolicy = new ShiroSecurityPolicy(iniResourcePath, passPhrase, true, permissionsList);",
"protected RouteBuilder createRouteBuilder() throws Exception { final ShiroSecurityPolicy securityPolicy = new ShiroSecurityPolicy(\"classpath:shiro.ini\", passPhrase); return new RouteBuilder() { public void configure() { onException(UnknownAccountException.class). to(\"mock:authenticationException\"); onException(IncorrectCredentialsException.class). to(\"mock:authenticationException\"); onException(LockedAccountException.class). to(\"mock:authenticationException\"); onException(AuthenticationException.class). to(\"mock:authenticationException\"); from(\"direct:secureEndpoint\"). to(\"log:incoming payload\"). policy(securityPolicy). to(\"mock:success\"); } }; }",
"protected RouteBuilder createRouteBuilder() throws Exception { final ShiroSecurityPolicy securityPolicy = new ShiroSecurityPolicy(\"./src/test/resources/securityconfig.ini\", passPhrase); return new RouteBuilder() { public void configure() { onException(UnknownAccountException.class). to(\"mock:authenticationException\"); onException(IncorrectCredentialsException.class). to(\"mock:authenticationException\"); onException(LockedAccountException.class). to(\"mock:authenticationException\"); onException(AuthenticationException.class). to(\"mock:authenticationException\"); from(\"direct:secureEndpoint\"). to(\"log:incoming payload\"). policy(securityPolicy). to(\"mock:success\"); } }; }",
"ShiroSecurityToken shiroSecurityToken = new ShiroSecurityToken(\"ringo\", \"starr\"); ShiroSecurityTokenInjector shiroSecurityTokenInjector = new ShiroSecurityTokenInjector(shiroSecurityToken, passPhrase); from(\"direct:client\"). process(shiroSecurityTokenInjector). to(\"direct:secureEndpoint\");",
"@Test public void testSuccessfulShiroAuthenticationWithNoAuthorization() throws Exception { //Incorrect password ShiroSecurityToken shiroSecurityToken = new ShiroSecurityToken(\"ringo\", \"stirr\"); // TestShiroSecurityTokenInjector extends ShiroSecurityTokenInjector TestShiroSecurityTokenInjector shiroSecurityTokenInjector = new TestShiroSecurityTokenInjector(shiroSecurityToken, passPhrase); successEndpoint.expectedMessageCount(1); failureEndpoint.expectedMessageCount(0); template.send(\"direct:secureEndpoint\", shiroSecurityTokenInjector); successEndpoint.assertIsSatisfied(); failureEndpoint.assertIsSatisfied(); }",
"ShiroSecurityToken shiroSecurityToken = new ShiroSecurityToken(\"ringo\", \"starr\"); template.sendBodyAndHeader(\"direct:secureEndpoint\", \"Beatle Mania\", ShiroSecurityConstants.SHIRO_SECURITY_TOKEN, shiroSecurityToken);",
"Map<String, Object> headers = new HashMap<String, Object>(); headers.put(ShiroSecurityConstants.SHIRO_SECURITY_USERNAME, \"ringo\"); headers.put(ShiroSecurityConstants.SHIRO_SECURITY_PASSWORD, \"starr\"); template.sendBodyAndHeaders(\"direct:secureEndpoint\", \"Beatle Mania\", headers);"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/shirosecurity-shirosecuritycomponent |
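A note on role-based authorization: the options table above lists rolesList and allRolesRequired, but the chapter only demonstrates permission-based authorization. The following sketch is not part of the original component documentation; it assumes that the rolesList and allRolesRequired options are exposed as JavaBean setters on ShiroSecurityPolicy, and it reuses the shiro.ini and passPhrase shown earlier. Treat it as an illustration of the idea rather than a definitive API reference.

import java.util.ArrayList;
import java.util.List;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.shiro.security.ShiroSecurityPolicy;

public class RoleSecuredRouteBuilder extends RouteBuilder {

    // Same 16-byte AES pass phrase as in the examples above
    private final byte[] passPhrase = {
        (byte) 0x08, (byte) 0x09, (byte) 0x0A, (byte) 0x0B,
        (byte) 0x0C, (byte) 0x0D, (byte) 0x0E, (byte) 0x0F,
        (byte) 0x10, (byte) 0x11, (byte) 0x12, (byte) 0x13,
        (byte) 0x14, (byte) 0x15, (byte) 0x16, (byte) 0x17 };

    @Override
    public void configure() throws Exception {
        // Authorize on roles instead of permissions (rolesList is Camel 2.13+)
        List<String> rolesList = new ArrayList<String>();
        rolesList.add("sec-level2");

        ShiroSecurityPolicy rolePolicy =
                new ShiroSecurityPolicy("classpath:shiro.ini", passPhrase);
        // Assumed setter names for the rolesList and allRolesRequired options
        rolePolicy.setRolesList(rolesList);
        rolePolicy.setAllRolesRequired(false); // any listed role is sufficient (the default)

        from("direct:roleSecuredEndpoint")
            .policy(rolePolicy)
            .to("mock:success");
    }
}

Requests sent to such a route still need to carry a ShiroSecurityToken, or the username/password headers described in section 303.8, so that authentication can take place before the role check is applied.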
Chapter 27. GRPCPreferencesService | Chapter 27. GRPCPreferencesService 27.1. Get GET /v1/grpc-preferences 27.1.1. Description 27.1.2. Parameters 27.1.3. Return Type V1Preferences 27.1.4. Content Type application/json 27.1.5. Responses Table 27.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1Preferences 0 An unexpected error response. GooglerpcStatus 27.1.6. Samples 27.1.7. Common object reference 27.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 27.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 27.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 27.1.7.3. V1Preferences Field Name Required Nullable Type Description Format maxGrpcReceiveSizeBytes String uint64 | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/grpcpreferencesservice |
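The Samples section above is empty in the generated reference. Purely as an illustration, and not taken from the API reference itself, a request against this endpoint could look like the following; the Central hostname, the API token variable, and the returned value are placeholders for your own environment, and Bearer token authentication is assumed.

# Hypothetical invocation -- adjust the Central address and authentication to your deployment.
curl -k -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://central.example.com/v1/grpc-preferences"

# A successful (200) response carries a V1Preferences body, for example:
# {
#   "maxGrpcReceiveSizeBytes": "12582912"
# }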
Chapter 150. KafkaNodePoolStatus schema reference | Chapter 150. KafkaNodePoolStatus schema reference Used in: KafkaNodePool Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. nodeIds integer array Node IDs used by Kafka nodes in this pool. clusterId string Kafka cluster ID. roles string (one or more of [controller, broker]) array Added in Streams for Apache Kafka 2.7. The roles currently assigned to this pool. replicas integer The current number of pods being used to provide this resource. labelSelector string Label selector for pods providing this resource. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaNodePoolStatus-reference |
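For orientation only, the following fragment sketches how the properties above might appear in the status of a reconciled KafkaNodePool resource. It is not taken from the schema reference; all names and values are illustrative placeholders.

status:
  conditions:
    - type: Ready
      status: "True"
  observedGeneration: 1
  nodeIds:
    - 0
    - 1
    - 2
  clusterId: "my-kafka-cluster-id"
  roles:
    - broker
  replicas: 3
  labelSelector: "strimzi.io/cluster=my-cluster,strimzi.io/pool-name=pool-a"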
7.5. Defining Audit Rules | 7.5. Defining Audit Rules The Audit system operates on a set of rules that define what is to be captured in the log files. The following types of Audit rules can be specified: Control rules Allow the Audit system's behavior and some of its configuration to be modified. File system rules Also known as file watches, allow the auditing of access to a particular file or a directory. System call rules Allow logging of system calls that any specified program makes. Audit rules can be set: on the command line using the auditctl utility. Note that these rules are not persistent across reboots. For details, see Section 7.5.1, "Defining Audit Rules with auditctl " in the /etc/audit/audit.rules file. For details, see Section 7.5.3, "Defining Persistent Audit Rules and Controls in the /etc/audit/audit.rules File" 7.5.1. Defining Audit Rules with auditctl The auditctl command allows you to control the basic functionality of the Audit system and to define rules that decide which Audit events are logged. Note All commands which interact with the Audit service and the Audit log files require root privileges. Ensure you execute these commands as the root user. Additionally, the CAP_AUDIT_CONTROL capability is required to set up audit services and the CAP_AUDIT_WRITE capabilityis required to log user messages. Defining Control Rules The following are some of the control rules that allow you to modify the behavior of the Audit system: -b sets the maximum amount of existing Audit buffers in the kernel, for example: -f sets the action that is performed when a critical error is detected, for example: The above configuration triggers a kernel panic in case of a critical error. -e enables and disables the Audit system or locks its configuration, for example: The above command locks the Audit configuration. -r sets the rate of generated messages per second, for example: The above configuration sets no rate limit on generated messages. -s reports the status of the Audit system, for example: -l lists all currently loaded Audit rules, for example: -D deletes all currently loaded Audit rules, for example: Defining File System Rules To define a file system rule, use the following syntax: where: path_to_file is the file or directory that is audited. permissions are the permissions that are logged: r - read access to a file or a directory. w - write access to a file or a directory. x - execute access to a file or a directory. a - change in the file's or directory's attribute. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.1. File System Rules To define a rule that logs all write access to, and every attribute change of, the /etc/passwd file, execute the following command: Note that the string following the -k option is arbitrary. To define a rule that logs all write access to, and every attribute change of, all the files in the /etc/selinux/ directory, execute the following command: To define a rule that logs the execution of the /sbin/insmod command, which inserts a module into the Linux kernel, execute the following command: Defining System Call Rules To define a system call rule, use the following syntax: where: action and filter specify when a certain event is logged. action can be either always or never . filter specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of the following: task , exit , user , and exclude . 
For more information about these filters, see the beginning of Section 7.1, "Audit System Architecture" . system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. Several system calls can be grouped into one rule, each specified after its own -S option. field = value specifies additional options that further modify the rule to match events based on a specified architecture, group ID, process ID, and others. For a full listing of all available field types and their values, see the auditctl (8) man page. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.2. System Call Rules To define a rule that creates a log entry every time the adjtimex or settimeofday system calls are used by a program, and the system uses the 64-bit architecture, use the following command: To define a rule that creates a log entry every time a file is deleted or renamed by a system user whose ID is 1000 or larger, use the following command: Note that the -F auid!=4294967295 option is used to exclude users whose login UID is not set. It is also possible to define a file system rule using the system call rule syntax. The following command creates a rule for system calls that is analogous to the -w /etc/shadow -p wa file system rule: 7.5.2. Defining Executable File Rules To define an executable file rule, use the following syntax: where: action and filter specify when a certain event is logged. action can be either always or never . filter specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of the following: task , exit , user , and exclude . For more information about these filters, see the beginning of Section 7.1, "Audit System Architecture" . system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. Several system calls can be grouped into one rule, each specified after its own -S option. path_to_executable_file is the absolute path to the executable file that is audited. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.3. Executable File Rules To define a rule that logs all execution of the /bin/id program, execute the following command: 7.5.3. Defining Persistent Audit Rules and Controls in the /etc/audit/audit.rules File To define Audit rules that are persistent across reboots, you must either directly include them in the /etc/audit/audit.rules file or use the augenrules program that reads rules located in the /etc/audit/rules.d/ directory. The /etc/audit/audit.rules file uses the same auditctl command line syntax to specify the rules. Empty lines and text following a hash sign ( # ) are ignored. The auditctl command can also be used to read rules from a specified file using the -R option, for example: Defining Control Rules A file can contain only the following control rules that modify the behavior of the Audit system: -b , -D , -e , -f , -r , --loginuid-immutable , and --backlog_wait_time . For more information on these options, see the section called "Defining Control Rules" . Example 7.4. Control Rules in audit.rules Defining File System and System Call Rules File system and system call rules are defined using the auditctl syntax. 
The examples in Section 7.5.1, "Defining Audit Rules with auditctl " can be represented with the following rules file: Example 7.5. File System and System Call Rules in audit.rules Preconfigured Rules Files In the /usr/share/doc/audit/rules/ directory, the audit package provides a set of pre-configured rules files according to various certification standards: 30-nispom.rules - Audit rule configuration that meets the requirements specified in the Information System Security chapter of the National Industrial Security Program Operating Manual. 30-pci-dss-v31.rules - Audit rule configuration that meets the requirements set by Payment Card Industry Data Security Standard (PCI DSS) v3.1. 30-stig.rules - Audit rule configuration that meets the requirements set by Security Technical Implementation Guides (STIG). To use these configuration files, create a backup of your original /etc/audit/audit.rules file and copy the configuration file of your choice over the /etc/audit/audit.rules file: Note The Audit rules have a numbering scheme that allows them to be ordered. To learn more about the naming scheme, see the /usr/share/doc/audit/rules/README-rules file. Using augenrules to Define Persistent Rules The augenrules script reads rules located in the /etc/audit/rules.d/ directory and compiles them into an audit.rules file. This script processes all files that end in .rules in a specific order based on their natural sort order. The files in this directory are organized into groups with the following meanings: 10 - Kernel and auditctl configuration 20 - Rules that could match general rules but you want a different match 30 - Main rules 40 - Optional rules 50 - Server-specific rules 70 - System local rules 90 - Finalize (immutable) The rules are not meant to be used all at once. They are pieces of a policy that should be thought out and individual files copied to /etc/audit/rules.d/ . For example, to set a system up in the STIG configuration, copy rules 10-base-config, 30-stig, 31-privileged, and 99-finalize. Once you have the rules in the /etc/audit/rules.d/ directory, load them by running the augenrules script with the --load directive: For more information on the Audit rules and the augenrules script, see the audit.rules(8) and augenrules(8) man pages. | [
"~]# auditctl -b 8192",
"~]# auditctl -f 2",
"~]# auditctl -e 2",
"~]# auditctl -r 0",
"~]# auditctl -s AUDIT_STATUS: enabled=1 flag=2 pid=0 rate_limit=0 backlog_limit=8192 lost=259 backlog=0",
"~]# auditctl -l -w /etc/passwd -p wa -k passwd_changes -w /etc/selinux -p wa -k selinux_changes -w /sbin/insmod -p x -k module_insertion ...",
"~]# auditctl -D No rules",
"auditctl -w path_to_file -p permissions -k key_name",
"~]# auditctl -w /etc/passwd -p wa -k passwd_changes",
"~]# auditctl -w /etc/selinux/ -p wa -k selinux_changes",
"~]# auditctl -w /sbin/insmod -p x -k module_insertion",
"auditctl -a action , filter -S system_call -F field = value -k key_name",
"~]# auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change",
"~]# auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete",
"~]# auditctl -a always,exit -F path=/etc/shadow -F perm=wa",
"auditctl -a action , filter [ -F arch=cpu -S system_call ] -F exe= path_to_executable_file -k key_name",
"~]# auditctl -a always,exit -F exe=/bin/id -F arch=b64 -S execve -k execution_bin_id",
"~]# auditctl -R /usr/share/doc/audit/rules/30-stig.rules",
"Delete all previous rules -D Set buffer size -b 8192 Make the configuration immutable -- reboot is required to change audit rules -e 2 Panic when a failure occurs -f 2 Generate at most 100 audit messages per second -r 100 Make login UID immutable once it is set (may break containers) --loginuid-immutable 1",
"-w /etc/passwd -p wa -k passwd_changes -w /etc/selinux/ -p wa -k selinux_changes -w /sbin/insmod -p x -k module_insertion -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete",
"~]# cp /etc/audit/audit.rules /etc/audit/audit.rules_backup ~]# cp /usr/share/doc/audit/rules/30-stig.rules /etc/audit/audit.rules",
"~]# augenrules --load augenrules --load No rules enabled 1 failure 1 pid 634 rate_limit 0 backlog_limit 8192 lost 0 backlog 0 enabled 1 failure 1 pid 634 rate_limit 0 backlog_limit 8192 lost 0 backlog 1"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-defining_audit_rules_and_controls |
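As a small worked example of the augenrules workflow described above (the file name, the watched path, and the key are examples chosen here, not part of the original guide), a local rule can be added as a fragment in the /etc/audit/rules.d/ directory and then compiled and loaded:

~]# cat > /etc/audit/rules.d/70-local-sshd.rules << 'EOF'
# Local rule: watch the sshd configuration for writes and attribute changes
-w /etc/ssh/sshd_config -p wa -k sshd_config_changes
EOF

~]# augenrules --load

~]# auditctl -l | grep sshd_config_changes
-w /etc/ssh/sshd_config -p wa -k sshd_config_changes

Because the fragment uses the 70 prefix, it is compiled after the main and server-specific rules but before any finalize (immutable) rules.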
Chapter 6. Configuring the overcloud to use an external load balancer | Chapter 6. Configuring the overcloud to use an external load balancer In Red Hat OpenStack Platform (RHOSP), the overcloud uses multiple Controller nodes together as a high availability cluster to ensure maximum operational performance for your OpenStack services. The cluster also provides load balancing for OpenStack services, which evenly distributes traffic to the Controller nodes and reduces server overload for each node. By default, the overcloud uses an open source tool called HAProxy to manage load balancing. HAProxy load balances traffic to the Controller nodes that run OpenStack services. The haproxy package contains the haproxy daemon that listens to incoming traffic, and includes logging features and sample configurations. The overcloud also uses the high availability resource manager Pacemaker to control HAProxy as a highly available service. This means that HAProxy runs on each Controller node and distributes traffic according to a set of rules that you define in each configuration. You can also use an external load balancer to perform this distribution. For example, your organization might use a dedicated hardware-based load balancer to handle traffic distribution to the Controller nodes. To define the configuration for an external load balancer and the overcloud creation, you perform the following processes: Install and configure an external load balancer. Configure and deploy the overcloud with heat template parameters to integrate the overcloud with the external load balancer. This requires the IP addresses of the load balancer and of the potential nodes. Before you configure your overcloud to use an external load balancer, ensure that you deploy and run high availability on the overcloud. 6.1. Preparing your environment for an external load balancer To prepare your environment for an external load balancer, first create a node definition template and register blank nodes with director. Then, inspect the hardware of all nodes and manually tag nodes into profiles. Use the following workflow to prepare your environment: Create a node definition template and register blank nodes with Red Hat OpenStack Platform director. A node definition template instackenv.json is a JSON format file and contains the hardware and power management details to register nodes. Inspect the hardware of all nodes. This ensures that all nodes are in a manageable state. Manually tag nodes into profiles. These profile tags match the nodes to flavors. The flavors are then assigned to a deployment role. Procedure Log in to the director host as the stack user and source the director credentials: Create a node definition template instackenv.json and copy and edit the following example based on your environment: Save the file to the home directory of the stack user, /home/stack/instackenv.json , then import it into director and register the nodes to the director: Assign the kernel and ramdisk images to all nodes: Inspect the hardware attributes of each node: Important The nodes must be in the manageable state. Ensure that this process runs to completion. This process usually takes 15 minutes for bare metal nodes. Get the list of your nodes to identify their UUIDs: Manually tag each node to a specific profile by adding a profile option in the properties/capabilities parameter for each node. 
For example, to tag three nodes to use a Controller profile and one node to use a Compute profile, use the following commands: The profile:compute and profile:control options tag the nodes into each respective profile. Additional resources Planning your overcloud 6.2. Configuring the overcloud network for an external load balancer To configure the network for the overcloud, isolate your services to use specific network traffic and then configure the network environment file for your local environment. This file is a heat environment file that describes the overcloud network environment, points to the network interface configuration templates, and defines the subnets and VLANs for your network and the IP address ranges. Procedure To configure the node interfaces for each role, customize the following network interface templates: To configure a single NIC with VLANs for each role, use the example templates in the following directory: To configure bonded NICs for each role, use the example templates in the following directory: Create a network environment file by copying the file from /home/stack/network-environment.yaml and editing the content based on your environment. Additional resources Basic network isolation Custom composable networks Custom network interface templates Overcloud networks 6.3. Creating an external load balancer environment file To deploy an overcloud with an external load balancer, create a new environment file with the required configuration. In this example file, several virtual IPs are configured on the external load balancer, one virtual IP on each isolated network, and one for the Redis service, before the overcloud deployment starts. Some of the virtual IPs can be identical if the overcloud node NICs configuration supports the configuration. Procedure Use the following example environment file external-lb.yaml to create the environment file, and edit the content based on your environment. Note The parameter_defaults section contains the VIP and IP assignments for each network. These settings must match the same IP configuration for each service on the load balancer. The parameter_defaults section defines an administrative password for the Redis service (RedisPassword) and contains the ServiceNetMap parameter, which maps each OpenStack service to a specific network. The load balancing configuration requires this services remap. 6.4. Configuring SSL for external load balancing To configure encrypted endpoints for the external load balancer, create additional environment files that enable SSL to access endpoints and then install a copy of your SSL certificate and key on your external load balancing server. By default, the overcloud uses unencrypted endpoints services. Prerequisites If you are using an IP address or domain name to access the public endpoints, choose one of the following environment files to include in your overcloud deployment: To access the public endpoints with a domain name service (DNS), use the file /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-dns.yaml . To access the public endpoints with an IP address, use the file /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml . 
Procedure If you use a self-signed certificate or if the certificate signer is not in the default trust store on the overcloud image, inject the certificate into the overcloud image by copying the inject-trust-anchor.yaml environment file from the heat template collection: Open the file in a text editor and copy the contents of the root certificate authority file to the SSLRootCertificate parameter: Important The certificate authority content requires the same indentation level for all new lines. Change the resource URL for the OS::TripleO::NodeTLSCAData: parameter to an absolute URL: Optional: If you use a DNS hostname to access the overcloud through SSL/TLS, create a new environment file ~/templates/cloudname.yaml and define the hostname of the overcloud endpoints: Replace overcloud.example.com with the DNS hostname for the overcloud endpoints. 6.5. Deploying the overcloud with an external load balancer To deploy an overcloud that uses an external load balancer, run the openstack overcloud deploy and include the additional environment files and configuration files for the external load balancer. Prerequisites Environment is prepared for an external load balancer. For more information about how to prepare your environment, see Section 6.1, "Preparing your environment for an external load balancer" Overcloud network is configured for an external load balancer. For information about how to configure your network, see Section 6.2, "Configuring the overcloud network for an external load balancer" External load balancer environment file is prepared. For information about how to create the environment file, see Section 6.3, "Creating an external load balancer environment file" SSL is configured for external load balancing. For information about how to configure SSL for external load balancing, see Section 6.4, "Configuring SSL for external load balancing" Procedure Deploy the overcloud with all the environment and configuration files for an external load balancer: Replace the values in angle brackets <> with the file paths you defined for your environment. Important You must add the network environment files to the command in the order listed in this example. This command includes the following environment files: network-isolation.yaml : Network isolation configuration file. network-environment.yaml : Network configuration file. external-loadbalancer-vip.yaml : External load balancing virtual IP addresses configuration file. external-lb.yaml : External load balancer configuration file. You can also set the following options for this file and adjust the values for your environment: --control-scale 3 : Scale the Controller nodes to three. --compute-scale 3 : Scale the Compute nodes to three. --control-flavor control : Use a specific flavor for the Controller nodes. --compute-flavor compute : Use a specific flavor for the Compute nodes. SSL/TLS environment files: SSL/TLS endpoint environment file : Environment file that defines how to connect to public endpoinst. Use tls-endpoints-public-dns.yaml or tls-endpoints-public-ip.yaml . (Optional) DNS hostname environment file : The environment file to set the DNS hostname. Root certificate injection environment file : The environment file to inject the root certificate authority. During the overcloud deployment process, Red Hat OpenStack Platform director provisions your nodes. This process takes some time to complete. To view the status of the overcloud deployment, enter the following commands: 6.6. 
Additional resources Load balancing traffic with HAProxy . | [
"source ~/stackrc",
"{ \"nodes\":[ { \"mac\":[ \"bb:bb:bb:bb:bb:bb\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.205\" }, { \"mac\":[ \"cc:cc:cc:cc:cc:cc\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.206\" }, { \"mac\":[ \"dd:dd:dd:dd:dd:dd\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.207\" }, { \"mac\":[ \"ee:ee:ee:ee:ee:ee\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.208\" } ] }",
"openstack overcloud node import ~/instackenv.json",
"openstack overcloud node configure",
"openstack overcloud node introspect --all-manageable",
"openstack baremetal node list",
"openstack baremetal node set 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --property capabilities='profile:control,boot_option:local' openstack baremetal node set 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a --property capabilities='profile:control,boot_option:local' openstack baremetal node set 5e3b2f50-fcd9-4404-b0a2-59d79924b38e --property capabilities='profile:control,boot_option:local' openstack baremetal node set 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 --property capabilities='profile:compute,boot_option:local'",
"/usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans",
"/usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans",
"parameter_defaults: ControlFixedIPs: [{'ip_address':'192.0.2.250'}] PublicVirtualFixedIPs: [{'ip_address':'172.16.23.250'}] InternalApiVirtualFixedIPs: [{'ip_address':'172.16.20.250'}] StorageVirtualFixedIPs: [{'ip_address':'172.16.21.250'}] StorageMgmtVirtualFixedIPs: [{'ip_address':'172.16.19.250'}] RedisVirtualFixedIPs: [{'ip_address':'172.16.20.249'}] # IPs assignments for the Overcloud Controller nodes. Ensure these IPs are from each respective allocation pools defined in the network environment file. ControllerIPs: external: - 172.16.23.150 - 172.16.23.151 - 172.16.23.152 internal_api: - 172.16.20.150 - 172.16.20.151 - 172.16.20.152 storage: - 172.16.21.150 - 172.16.21.151 - 172.16.21.152 storage_mgmt: - 172.16.19.150 - 172.16.19.151 - 172.16.19.152 tenant: - 172.16.22.150 - 172.16.22.151 - 172.16.22.152 # CIDRs external_cidr: \"24\" internal_api_cidr: \"24\" storage_cidr: \"24\" storage_mgmt_cidr: \"24\" tenant_cidr: \"24\" RedisPassword: p@55w0rd! ServiceNetMap: NeutronTenantNetwork: tenant CeilometerApiNetwork: internal_api AodhApiNetwork: internal_api GnocchiApiNetwork: internal_api MongoDbNetwork: internal_api CinderApiNetwork: internal_api CinderIscsiNetwork: storage GlanceApiNetwork: storage GlanceRegistryNetwork: internal_api KeystoneAdminApiNetwork: internal_api KeystonePublicApiNetwork: internal_api NeutronApiNetwork: internal_api HeatApiNetwork: internal_api NovaApiNetwork: internal_api NovaMetadataNetwork: internal_api NovaVncProxyNetwork: internal_api SwiftMgmtNetwork: storage_mgmt SwiftProxyNetwork: storage HorizonNetwork: internal_api MemcachedNetwork: internal_api RabbitMqNetwork: internal_api RedisNetwork: internal_api MysqlNetwork: internal_api CephClusterNetwork: storage_mgmt CephPublicNetwork: storage ControllerHostnameResolveNetwork: internal_api ComputeHostnameResolveNetwork: internal_api BlockStorageHostnameResolveNetwork: internal_api ObjectStorageHostnameResolveNetwork: internal_api CephStorageHostnameResolveNetwork: storage",
"cp -r /usr/share/openstack-tripleo-heat-templates/environments/inject-trust-anchor.yaml ~/templates/",
"parameter_defaults: SSLRootCertificate: | -----BEGIN CERTIFICATE----- MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp -----END CERTIFICATE-----",
"resource_registry: OS::TripleO::NodeTLSCAData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/ca-inject.yaml",
"parameter_defaults: CloudName: overcloud.example.com",
"openstack overcloud deploy --templates / -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml / -e ~/network-environment.yaml / -e /usr/share/openstack-tripleo-heat-templates/environments/external-loadbalancer-vip.yaml / -e ~/external-lb.yaml --control-scale 3 --compute-scale 1 --control-flavor control --compute-flavor compute / -e <SSL/TLS endpoint environment file> / -e <DNS hostname environment file> / -e <root certificate injection environment file> / -e <additional_options_if_needed>",
"source ~/stackrc openstack stack list --nested"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_high_availability_services/assembly_configuring-overcloud-to-use-external-load-balancer_rhosp |
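The procedures above assume that the external load balancer itself is already installed and configured. As a rough sketch only, not taken from the Red Hat documentation, a front end for the Identity service public endpoint on an external HAProxy host might look like the following; the IP addresses reuse the example values from external-lb.yaml, while the port, balancing mode, and health-check settings are placeholders that you must adapt to your own load balancer.

# Illustrative fragment for an external HAProxy instance -- adapt to your environment.
listen keystone_public
  bind 172.16.23.250:5000
  mode tcp
  balance source
  server overcloud-controller-0 172.16.20.150:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.20.151:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.20.152:5000 check fall 5 inter 2000 rise 2

Whichever load balancer you use, the listening addresses must match the virtual IPs defined in external-lb.yaml and the member addresses must match the per-network Controller IPs, because director does not configure the external device for you.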
Chapter 5. Kafka producer configuration tuning | Chapter 5. Kafka producer configuration tuning Use configuration properties to optimize the performance of Kafka producers. You can use standard Kafka producer configuration options. Adjusting your configuration to maximize throughput might increase latency or vice versa. You will need to experiment and tune your producer configuration to get the balance you need. When configuring a producer, consider the following aspects carefully, as they significantly impact its performance and behavior: Compression By compressing messages before they are sent over the network, you can conserve network bandwidth and reduce disk storage requirements, but with the additional cost of increased CPU utilization due to the compression and decompression processes. Batching Adjusting the batch size and time intervals when the producer sends messages can affect throughput and latency. Partitioning Partitioning strategies in the Kafka cluster can support producers through parallelism and load balancing, whereby producers can write to multiple partitions concurrently and each partition receives an equal share of messages. Other strategies might include topic replication for fault tolerance. Securing access Implement security measures for authentication, encryption, and authorization by setting up user accounts to manage secure access to Kafka . 5.1. Basic producer configuration Connection and serializer properties are required for every producer. Generally, it is good practice to add a client id for tracking, and use compression on the producer to reduce batch sizes in requests. In a basic producer configuration: The order of messages in a partition is not guaranteed. The acknowledgment of messages reaching the broker does not guarantee durability. Basic producer configuration properties # ... bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5 # ... 1 (Required) Tells the producer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The producer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it's not necessary to provide a list of all the brokers in the cluster. 2 (Required) Serializer to transform the key of each message to bytes prior to them being sent to a broker. 3 (Required) Serializer to transform the value of each message to bytes prior to them being sent to a broker. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. 5 (Optional) The codec for compressing messages, which are sent and might be stored in compressed format and then decompressed when reaching a consumer. Compression is useful for improving throughput and reducing the load on storage, but might not be suitable for low latency applications where the cost of compression or decompression could be prohibitive. 5.2. Data durability Message delivery acknowledgments minimize the likelihood that messages are lost. By default, acknowledgments are enabled with the acks property set to acks=all . To control the maximum time the producer waits for acknowledgments from the broker and handle potential delays in sending messages, you can use the delivery.timeout.ms property. Acknowledging message delivery # ... 
acks=all 1 delivery.timeout.ms=120000 2 # ... 1 acks=all forces a leader replica to replicate messages to a certain number of followers before acknowledging that the message request was successfully received. 2 The maximum time in milliseconds to wait for a complete send request. You can set the value to MAX_LONG to delegate to Kafka an indefinite number of retries. The default is 120000 or 2 minutes. The acks=all setting offers the strongest guarantee of delivery, but it will increase the latency between the producer sending a message and receiving acknowledgment. If you don't require such strong guarantees, a setting of acks=0 or acks=1 provides either no delivery guarantees or only acknowledgment that the leader replica has written the record to its log. With acks=all , the leader waits for all in-sync replicas to acknowledge message delivery. A topic's min.insync.replicas configuration sets the minimum required number of in-sync replica acknowledgements. The number of acknowledgements includes that of the leader and followers. A typical starting point is to use the following configuration: Producer configuration: acks=all (default) Broker configuration for topic replication: default.replication.factor=3 (default = 1 ) min.insync.replicas=2 (default = 1 ) When you create a topic, you can override the default replication factor. You can also override min.insync.replicas at the topic level in the topic configuration. Streams for Apache Kafka uses this configuration in the example configuration files for multi-node deployment of Kafka. The following table describes how this configuration operates depending on the availability of followers that replicate the leader replica. Table 5.1. Follower availability Number of followers available and in-sync Acknowledgements Producer can send messages? 2 The leader waits for 2 follower acknowledgements Yes 1 The leader waits for 1 follower acknowledgement Yes 0 The leader raises an exception No A topic replication factor of 3 creates one leader replica and two followers. In this configuration, the producer can continue if a single follower is unavailable. Some delay can occur whilst removing a failed broker from the in-sync replicas or creating a new leader. If the second follower is also unavailable, message delivery will not be successful. Instead of acknowledging successful message delivery, the leader sends an error ( not enough replicas ) to the producer. The producer raises an equivalent exception. With retries configuration, the producer can resend the failed message request. Note If the system fails, there is a risk of unsent data in the buffer being lost. 5.3. Ordered delivery Idempotent producers avoid duplicates as messages are delivered exactly once. IDs and sequence numbers are assigned to messages to ensure the order of delivery, even in the event of failure. If you are using acks=all for data consistency, using idempotency makes sense for ordered delivery. Idempotency is enabled for producers by default. With idempotency enabled, you can set the number of concurrent in-flight requests to a maximum of 5 for message ordering to be preserved. Ordered delivery with idempotency # ... enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4 # ...
4 Set the number of attempts to resend a failed message request. If you choose not to use acks=all and disable idempotency because of the performance cost, set the number of in-flight (unacknowledged) requests to 1 to preserve ordering. Otherwise, a situation is possible where Message-A fails only to succeed after Message-B was already written to the broker. Ordered delivery without idempotency # ... enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647 # ... 1 Set to false to disable the idempotent producer. 2 Set the number of in-flight requests to exactly 1 . 5.4. Reliability guarantees Idempotence is useful for exactly once writes to a single partition. Transactions, when used with idempotence, allow exactly once writes across multiple partitions. Transactions guarantee that messages using the same transactional ID are produced once, and either all are successfully written to the respective logs or none of them are. # ... enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2 # ... 1 Specify a unique transactional ID. 2 Set the maximum allowed time for transactions in milliseconds before a timeout error is returned. The default is 900000 or 15 minutes. The choice of transactional.id is important in order that the transactional guarantee is maintained. Each transactional id should be used for a unique set of topic partitions. For example, this can be achieved using an external mapping of topic partition names to transactional ids, or by computing the transactional id from the topic partition names using a function that avoids collisions. 5.5. Optimizing producers for throughput and latency Usually, the requirement of a system is to satisfy a particular throughput target for a proportion of messages within a given latency. For example, targeting 500,000 messages per second with 95% of messages being acknowledged within 2 seconds. It's likely that the messaging semantics (message ordering and durability) of your producer are defined by the requirements for your application. For instance, it's possible that you don't have the option of using acks=0 or acks=1 without breaking some important property or guarantee provided by your application. Broker restarts have a significant impact on high percentile statistics. For example, over a long period the 99th percentile latency is dominated by behavior around broker restarts. This is worth considering when designing benchmarks or comparing performance numbers from benchmarking with performance numbers seen in production. Depending on your objective, Kafka offers a number of configuration parameters and techniques for tuning producer performance for throughput and latency. Message batching ( linger.ms and batch.size ) Message batching delays sending messages in the hope that more messages destined for the same broker will be sent, allowing them to be batched into a single produce request. Batching is a compromise between higher latency in return for higher throughput. Time-based batching is configured using linger.ms , and size-based batching is configured using batch.size . Compression ( compression.type ) Message compression adds latency in the producer (CPU time spent compressing the messages), but makes requests (and potentially disk writes) smaller, which can increase throughput. Whether compression is worthwhile, and the best compression to use, will depend on the messages being sent. 
Compression happens on the thread which calls KafkaProducer.send() , so if the latency of this method matters for your application you should consider using more threads. Pipelining ( max.in.flight.requests.per.connection ) Pipelining means sending more requests before the response to a request has been received. In general more pipelining means better throughput, up to a threshold at which other effects, such as worse batching, start to counteract the effect on throughput. Lowering latency When your application calls the KafkaProducer.send() method, messages undergo a series of operations before being sent: Interception: Messages are processed by any configured interceptors. Serialization: Messages are serialized into the appropriate format. Partition assignment: Each message is assigned to a specific partition. Compression: Messages are compressed to conserve network bandwidth. Batching: Compressed messages are added to a batch in a partition-specific queue. During these operations, the send() method is momentarily blocked. It also remains blocked if the buffer.memory is full or if metadata is unavailable. Batches will remain in the queue until one of the following occurs: The batch is full (according to batch.size ). The delay introduced by linger.ms has passed. The sender is ready to dispatch batches for other partitions to the same broker and can include this batch. The producer is being flushed or closed. To minimize the impact of send() blocking on latency, optimize batching and buffering configurations. Use the linger.ms and batch.size properties to batch more messages into a single produce request for higher throughput. # ... linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3 # ... 1 The linger.ms property adds a delay in milliseconds so that larger batches of messages are accumulated and sent in a request. The default is 0 . 2 If a maximum batch.size in bytes is used, a request is sent when the maximum is reached, or messages have been queued for longer than linger.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. 3 The buffer size must be at least as big as the batch size, and be able to accommodate buffering, compression, and in-flight requests. Increasing throughput You can improve throughput of your message requests by directing messages to a specified partition using a custom partitioner to replace the default. # ... partitioner.class=my-custom-partitioner 1 # ... 1 Specify the class name of your custom partitioner. | [
"bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5",
"acks=all 1 delivery.timeout.ms=120000 2",
"enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4",
"enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647",
"enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2",
"linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3",
"partitioner.class=my-custom-partitioner 1"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_tuning/con-producer-config-properties-str |
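The sections above present the tuning options as producer properties fragments. As a convenience, and not as part of the original guide, the sketch below applies the same basic, durability, and batching settings programmatically when constructing a Java producer; the bootstrap address, topic name, key, and value are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TunedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();

        // Basic connection and serialization (placeholder bootstrap address)
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "my-client");
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");

        // Data durability and ordered delivery (sections 5.2 and 5.3)
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");

        // Throughput/latency trade-off (section 5.5)
        props.put(ProducerConfig.LINGER_MS_CONFIG, "100");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "16384");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Placeholder topic, key, and value
            producer.send(new ProducerRecord<>("my-topic", "my-key", "my-value"));
        }
    }
}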
Managing content | Managing content Red Hat Satellite 6.16 Import content from Red Hat and custom sources, manage the application life cycle across lifecycle environments, filter content by using Content Views, synchronize content between Satellite Servers, and more Red Hat Satellite Documentation Team [email protected] | [
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"",
"tree -d -L 11 └── content 1 ├── beta 2 │ └── rhel 3 │ └── server 4 │ └── 7 5 │ └── x86_64 6 │ └── sat-tools 7 └── dist └── rhel └── server └── 7 ├── 7.2 │ └── x86_64 │ └── kickstart └── 7Server └── x86_64 └── os",
"hammer alternate-content-source create --alternate-content-source-type custom --base-url \" https://local-repo.example.com:port \" --name \" My_ACS_Name \" --smart-proxy-ids My_Capsule_ID",
"hammer alternate-content-source list",
"hammer alternate-content-source refresh --id My_Alternate_Content_Source_ID",
"hammer alternate-content-source update --id My_Alternate_Content_Source_ID --smart-proxy-ids My_Capsule_ID",
"hammer alternate-content-source refresh --id My_Alternate_Content_Source_ID",
"hammer alternate-content-source create --alternate-content-source-type simplified --name My_ACS_Name --product-ids My_Product_ID --smart-proxy-ids My_Capsule_ID",
"hammer alternate-content-source list",
"hammer alternate-content-source refresh --id My_ACS_ID",
"rhui-manager repo info --repo_id My_Repo_ID",
"hammer alternate-content-source create --alternate-content-source-type rhui --base-url \" https://rhui-cds-node/pulp/content \" --name \" My_ACS_Name \" --smart-proxy-ids My_Capsule_ID --ssl-client-cert-id My_SSL_Client_Certificate_ID --ssl-client-key-id My_SSL_Client_Key_ID --subpaths path/to/repo/1/,path/to/repo/2/ --verify-ssl 1",
"hammer alternate-content-source list",
"hammer alternate-content-source refresh --id My_Alternate_Content_Source_ID",
"hammer alternate-content-source update --id My_Alternate_Content_Source_ID --smart-proxy-ids My_Capsule_ID",
"hammer alternate-content-source refresh --id My_Alternate_Content_Source_ID",
"scp My_SSL_Certificate [email protected]:~/.",
"wget -P ~ http:// upstream-satellite.example.com /pub/katello-server-ca.crt",
"hammer content-credential create --content-type cert --name \" My_SSL_Certificate \" --organization \" My_Organization \" --path ~/ My_SSL_Certificate",
"hammer product create --name \" My_Product \" --sync-plan \" Example Plan \" --description \" Content from My Repositories \" --organization \" My_Organization \"",
"hammer repository create --arch \" My_Architecture \" --content-type \"yum\" --gpg-key-id My_GPG_Key_ID --name \" My_Repository \" --organization \" My_Organization \" --os-version \" My_Operating_System_Version \" --product \" My_Product \" --publish-via-http true --url My_Upstream_URL",
"hammer product list --organization \" My_Organization \"",
"hammer repository-set list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"",
"hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (RPMs)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"",
"hammer product synchronize --name \" My_Product \" --organization \" My_Organization \"",
"hammer repository synchronize --name \" My_Repository \" --organization \" My_Organization \" --product \" My Product \"",
"ORG=\" My_Organization \" for i in USD(hammer --no-headers --csv repository list --organization USDORG --fields Id) do hammer repository synchronize --id USD{i} --organization USDORG --async done",
"hammer settings set --name default_redhat_download_policy --value immediate",
"hammer settings set --name default_download_policy --value immediate",
"hammer repository list --organization-label My_Organization_Label",
"hammer repository update --download-policy immediate --name \" My_Repository \" --organization-label My_Organization_Label --product \" My_Product \"",
"hammer repository list --organization-label My_Organization_Label",
"hammer repository update --id 1 --mirroring-policy mirror_complete",
"hammer repository upload-content --id My_Repository_ID --path /path/to/example-package.rpm",
"hammer repository upload-content --content-type srpm --id My_Repository_ID --path /path/to/example-package.src.rpm",
"semanage port -l | grep ^http_port_t http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000",
"semanage port -a -t http_port_t -p tcp 10011",
"hammer repository list --organization \" My_Organization \"",
"hammer repository synchronize --id My_ID",
"hammer repository synchronize --id My_ID --skip-metadata-check true",
"hammer repository synchronize --id My_ID --validate-contents true",
"hammer capsule content verify-checksum --id My_Capsule_ID --organization-id 1 --content-view-id 3",
"hammer capsule content verify-checksum --id My_Capsule_ID --organization-id 1 --lifecycle-environment-id 1",
"hammer capsule content verify-checksum --id My_Capsule_ID --organization-id 1 --repository-id 1",
"hammer capsule content verify-checksum --id My_Capsule_ID",
"hammer http-proxy create --name My_HTTP_Proxy --url http-proxy.example.com:8080",
"hammer repository update --http-proxy-policy HTTP_Proxy_Policy --id Repository_ID",
"hammer sync-plan create --description \" My_Description \" --enabled true --interval daily --name \" My_Products \" --organization \" My_Organization \" --sync-date \"2023-01-01 01:00:00\"",
"hammer sync-plan list --organization \" My_Organization \"",
"hammer product set-sync-plan --name \" My_Product_Name \" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan_Name \"",
"ORG=\" My_Organization \" SYNC_PLAN=\"daily_sync_at_3_a.m\" hammer sync-plan create --name USDSYNC_PLAN --interval daily --sync-date \"2023-04-5 03:00:00\" --enabled true --organization USDORG for i in USD(hammer --no-headers --csv --csv-separator=\"|\" product list --organization USDORG --per-page 999 | grep -vi not_synced | awk -F'|' 'USD5 != \"0\" { print USD1}') do hammer product set-sync-plan --sync-plan USDSYNC_PLAN --organization USDORG --id USDi done",
"hammer product list --organization USDORG --sync-plan USDSYNC_PLAN",
"hammer repository update --download-concurrency 5 --id Repository_ID --organization \" My_Organization \"",
"wget http://www.example.com/9.5/example-9.5-2.noarch.rpm",
"rpm2cpio example-9.5-2.noarch.rpm | cpio -idmv",
"-----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFy/HE4BEADttv2TCPzVrre+aJ9f5QsR6oWZMm7N5Lwxjm5x5zA9BLiPPGFN 4aTUR/g+K1S0aqCU+ZS3Rnxb+6fnBxD+COH9kMqXHi3M5UNzbp5WhCdUpISXjjpU XIFFWBPuBfyr/FKRknFH15P+9kLZLxCpVZZLsweLWCuw+JKCMmnA =F6VG -----END PGP PUBLIC KEY BLOCK----- -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFw467UBEACmREzDeK/kuScCmfJfHJa0Wgh/2fbJLLt3KSvsgDhORIptf+PP OTFDlKuLkJx99ZYG5xMnBG47C7ByoMec1j94YeXczuBbynOyyPlvduma/zf8oB9e Wl5GnzcLGAnUSRamfqGUWcyMMinHHIKIc1X1P4I= =WPpI -----END PGP PUBLIC KEY BLOCK-----",
"scp ~/etc/pki/rpm-gpg/RPM-GPG-KEY- EXAMPLE-95 [email protected]:~/.",
"hammer content-credentials create --content-type gpg_key --name \" My_GPG_Key \" --organization \" My_Organization \" --path ~/RPM-GPG-KEY- EXAMPLE-95",
"hammer lifecycle-environment create --name \" Environment Path Name \" --description \" Environment Path Description \" --prior \"Library\" --organization \" My_Organization \"",
"hammer lifecycle-environment create --name \" Environment Name \" --description \" Environment Description \" --prior \" Prior Environment Name \" --organization \" My_Organization \"",
"hammer lifecycle-environment paths --organization \" My_Organization \"",
"hammer capsule list",
"hammer capsule info --id My_capsule_ID",
"hammer capsule content available-lifecycle-environments --id My_capsule_ID",
"hammer capsule content add-lifecycle-environment --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID --organization \" My_Organization \"",
"hammer capsule content synchronize --id My_capsule_ID",
"hammer capsule content synchronize --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID",
"hammer capsule content synchronize --id My_capsule_ID --skip-metadata-check true",
"hammer lifecycle-environment list --organization \" My_Organization \"",
"hammer lifecycle-environment delete --name \" My_Environment \" --organization \" My_Organization \"",
"hammer capsule list",
"hammer capsule info --id My_Capsule_ID",
"hammer capsule content lifecycle-environments --id My_Capsule_ID",
"hammer capsule content remove-lifecycle-environment --id My_Capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID",
"hammer capsule content synchronize --id My_Capsule_ID",
"hammer repository list --organization \" My_Organization \"",
"hammer content-view create --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \" --repository-ids 1,2",
"hammer content-view publish --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \"",
"hammer content-view add-repository --name \" My_Content_View \" --organization \" My_Organization \" --repository-id repository_ID",
"hammer content-view copy --name My_original_CV_name --new-name My_new_CV_name",
"hammer content-view copy --id=5 --new-name=\"mixed_copy\" Content view copied.",
"hammer capsule content synchronize --content-view \"my content view name\"",
"hammer organization list",
"hammer module-stream list --organization-id My_Organization_ID",
"hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"",
"ORG=\" My_Organization \" CVV_ID= My_Content_View_Version_ID for i in USD(hammer --no-headers --csv lifecycle-environment list --organization USDORG | awk -F, {'print USD1'} | sort -n) do hammer content-view version promote --organization USDORG --to-lifecycle-environment-id USDi --id USDCVV_ID done",
"hammer content-view version info --id My_Content_View_Version_ID",
"hammer content-view version list --organization \" My_Organization \"",
"hammer content-view create --composite --auto-publish yes --name \" Example_Composite_Content_View \" --description \"Example composite content view\" --organization \" My_Organization \"",
"hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --latest --organization \" My_Organization \"",
"hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --component-content-view-version-id Content_View_Version_ID --organization \" My_Organization \"",
"hammer content-view publish --name \" Example_Composite_Content_View \" --description \"Initial version of composite content view\" --organization \" My_Organization \"",
"hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"",
"hammer content-view filter create --name \" Errata Filter \" --type erratum --content-view \" Example_Content_View \" --description \" My latest filter \" --inclusion false --organization \" My_Organization \"",
"hammer content-view filter rule create --content-view \" Example_Content_View \" --content-view-filter \" Errata Filter \" --start-date \" YYYY-MM-DD \" --types enhancement,bugfix --date-type updated --organization \" My_Organization \"",
"hammer content-view publish --name \" Example_Content_View \" --description \"Adding errata filter\" --organization \" My_Organization \"",
"hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"",
"curl http://satellite.example.com/pub/katello-server-ca.crt",
"hammer content-export complete library --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/1.0/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 03:35 metadata.json",
"hammer content-export complete library --chunk-size-gb=2 --organization=\" My_Organization \" Generated /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/metadata.json ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/",
"hammer content-export complete library --organization=\" My_Organization \" --format=syncable",
"du -sh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00",
"hammer content-import library --organization=\" My_Organization \" --path=\" My_Path_To_Syncable_Export \"",
"hammer content-export incremental library --organization=\" My_Organization \"",
"find /var/lib/pulp/exports/ My_Organization /Export-Library/",
"hammer content-view version list --content-view=\" My_Content_View \" --organization=\" My_Organization \" ---|----------|---------|-------------|----------------------- ID | NAME | VERSION | DESCRIPTION | LIFECYCLE ENVIRONMENTS ---|----------|---------|-------------|----------------------- 5 | view 3.0 | 3.0 | | Library 4 | view 2.0 | 2.0 | | 3 | view 1.0 | 1.0 | | ---|----------|---------|-------------|----------------------",
"hammer content-export complete version --content-view=\" Content_View_Name \" --version=1.0 --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization / Content_View_Name /1.0/2021-02-25T18-59-26-00-00/",
"hammer content-export complete version --chunk-size-gb=2 --content-view=\" Content_View_Name \" --organization=\" My_Organization \" --version=1.0 ls -lh /var/lib/pulp/exports/ My_Organization /view/1.0/2021-02-25T21-15-22-00-00/",
"hammer content-view version list --content-view=\" My_Content_View \" --organization=\" My_Organization \"",
"hammer content-export complete version --content-view=\" Content_View_Name \" --version=1.0 --organization=\" My_Organization \" --format=syncable",
"ls -lh /var/lib/pulp/exports/ My_Organization / My_Content_View_Name /1.0/2021-02-25T18-59-26-00-00/",
"hammer content-export incremental version --content-view=\" My_Content_View \" --organization=\" My_Organization \" --version=\" My_Content_View_Version \"",
"find /var/lib/pulp/exports/ My_Organization / My_Exported_Content_View / My_Content_View_Version /",
"hammer content-export complete repository --name=\" My_Repository \" --product=\" My_Product \" --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2022-09-02T03-35-24-00-00/",
"hammer content-export complete repository --organization=\" My_Organization \" --product=\" My_Product \" --name=\" My_Repository \" --format=syncable",
"du -sh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00",
"hammer content-export incremental repository --name=\" My_Repository \" --organization=\" My_Organization \" --product=\" My_Product \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /3.0/2021-03-02T03-35-24-00-00/ total 172K -rw-r--r--. 1 pulp pulp 20M Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422-toc.json -rw-r--r--. 1 root root 492 Mar 2 04:22 metadata.json",
"hammer content-export incremental repository --format=syncable --name=\" My_Repository \" --organization=\" My_Organization \" --product=\" My_Product \"",
"find /var/lib/pulp/exports/Default_Organization/ My_Product /2.0/2023-03-09T10-55-48-05-00/ -name \"*.rpm\"",
"hammer content-export complete library --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0",
"hammer content-export complete version --content-view=\" Content_View_Name \" --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0",
"hammer content-export list --organization=\" My_Organization \"",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 04:29 metadata.json",
"hammer content-import library --organization=\" My_Organization \" --path=/var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"hammer content-import library --organization=\" My_Organization \" --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"ls -lh /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"hammer content-import version --organization= My_Organization --path=/var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"hammer content-view version list --organization-id= My_Organization_ID",
"hammer content-import version --organization= My_Organization --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 04:29 metadata.json",
"hammer content-import repository --organization=\" My_Organization \" --path=/var/lib/pulp/imports/ 2021-03-02T03-35-24-00-00",
"hammer content-import repository --organization=\" My_Organization \" --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/",
"hammer hostgroup set-parameter --hostgroup \" My_Host_Group \" --name \" My_Activation_Key \" --value \" name_of_first_key \", \" name_of_second_key \",",
"hammer activation-key create --name \" My_Activation_Key \" --unlimited-hosts --description \" Example Stack in the Development Environment \" --lifecycle-environment \" Development \" --content-view \" Stack \" --organization \" My_Organization \"",
"hammer activation-key update --organization \" My_Organization \" --name \" My_Activation_Key \" --service-level \" Standard \" --purpose-usage \" Development/Test \" --purpose-role \" Red Hat Enterprise Linux Server \" --purpose-addons \" addons \"",
"hammer activation-key product-content --content-access-mode-all true --name \" My_Activation_Key \" --organization \" My_Organization \"",
"hammer activation-key content-override --name \" My_Activation_Key \" --content-label rhel-7-server-satellite-client-6-rpms --value 1 --organization \" My_Organization \"",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \"",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'",
"hammer activation-key update --name \" My_Activation_Key \" --organization \" My_Organization \" --service-level premium",
"parameter operator value",
"type = security and package_name = kernel",
"hammer erratum list",
"hammer erratum info --id erratum_ID",
"hammer erratum list --product-id 7 --search \"bug = 1213000 or bug = 1207972\" --errata-restrict-applicable 1 --order \"type desc\"",
"hammer template create --file \"~/ My_Snippet \" --locations \" My_Location \" --name \" My_Template_Name_custom_pre\" \\ --organizations \"_My_Organization \" --type snippet",
"hammer content-view filter create --content-view \" My_Content_View \" --description \"Exclude errata items from the YYYY-MM-DD \" --name \" My_Filter_Name \" --organization \" My_Organization \" --type \"erratum\"",
"hammer content-view filter rule create --content-view \" My_Content_View \" --content-view-filter=\" My_Content_View_Filter \" --organization \" My_Organization \" --start-date \" YYYY-MM-DD \" --types=security,enhancement,bugfix",
"hammer content-view publish --name \" My_Content_View \" --organization \" My_Organization \"",
"hammer content-view version promote --content-view \" My_Content_View \" --organization \" My_Organization \" --to-lifecycle-environment \" My_Lifecycle_Environment \"",
"hammer erratum list",
"hammer content-view version list",
"hammer content-view version incremental-update --content-view-version-id 319 --errata-ids 34068b",
"hammer host errata list --host client.example.com",
"hammer erratum info --id ERRATUM_ID",
"dnf upgrade Module_Stream_Name",
"hammer host errata list --host client.example.com",
"hammer erratum info --id ERRATUM_ID",
"dnf upgrade Module_Stream_Name",
"hammer host errata list --host client.example.com",
"hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 --search-query \"name = client.example.com\"",
"hammer erratum list --errata-restrict-installable true --organization \" Default Organization \"",
"hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID --search-query \"applicable_errata = ERRATUM_ID \"",
"for HOST in hammer --csv --csv-separator \"|\" host list --search \"applicable_errata = ERRATUM_ID\" --organization \"Default Organization\" | tail -n+2 | awk -F \"|\" '{ print USD2 }' ; do echo \"== Applying to USDHOST ==\" ; hammer host errata apply --host USDHOST --errata-ids ERRATUM_ID1,ERRATUM_ID2 ; done",
"hammer task list",
"hammer task progress --id task_ID",
"hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 ,... --search-query \"host_collection = HOST_COLLECTION_NAME \"",
"hammer product create --description \" My_Description \" --name \"Red Hat Container Catalog\" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan \"",
"hammer repository create --content-type \"docker\" --docker-upstream-name \"rhel7\" --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\" --url \"http://registry.access.redhat.com/\"",
"hammer repository synchronize --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\"",
"<%= repository.docker_upstream_name %>",
"mkdir -p /etc/containers/certs.d/hostname.example.com",
"mkdir -p /etc/docker/certs.d/hostname.example.com",
"cp rootCA.pem /etc/containers/certs.d/hostname.example.com/ca.crt",
"cp rootCA.pem /etc/docker/certs.d/hostname.example.com/ca.crt",
"podman login hostname.example.com Username: admin Password: Login Succeeded!",
"podman login satellite.example.com",
"podman search satellite.example.com/",
"podman pull satellite.example.com/my-image:<optional_tag>",
"podman push My_Container_Image_Hash satellite.example.com / My_Organization_Label / My_Product_Label / My_Repository_Name [:_My_Tag_] podman push My_Container_Image_Hash satellite.example.com /id/ My_Organization_ID / My_Product_ID / My_Repository_Name [:_My_Tag_]",
"hammer repository-set list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \" | grep \"file\"",
"hammer repository-set enable --product \"Red Hat Enterprise Linux Server\" --name \"Red Hat Enterprise Linux 7 Server (ISOs)\" --releasever 7.2 --basearch x86_64 --organization \" My_Organization \"",
"hammer repository list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"",
"hammer repository synchronize --name \"Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2\" --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"",
"hammer product create --name \" My_ISOs \" --sync-plan \"Example Plan\" --description \" My_Product \" --organization \" My_Organization \"",
"hammer repository create --name \" My_ISOs \" --content-type \"file\" --product \" My_Product \" --organization \" My_Organization \"",
"hammer repository upload-content --path ~/bootdisk.iso --id repo_ID --organization \" My_Organization \"",
"--- collections: - name: my_namespace.my_collection version: 1.2.3",
"subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-utils-6.16-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-utils",
"satellite-maintain packages install pulp-manifest",
"satellite-maintain packages unlock satellite-maintain packages install pulp-manifest satellite-maintain packages lock",
"mkdir -p /var/lib/pulp/ local_repos / my_file_repo",
"satellite-installer --foreman-proxy-content-pulpcore-additional-import-paths /var/lib/pulp/ local_repos",
"touch /var/lib/pulp/ local_repos / my_file_repo / test.txt",
"pulp-manifest /var/lib/pulp/ local_repos / my_file_repo",
"ls /var/lib/pulp/ local_repos / my_file_repo PULP_MANIFEST test.txt",
"subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-utils-6.16-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-utils",
"dnf install pulp-manifest",
"mkdir /var/www/html/pub/ my_file_repo",
"touch /var/www/html/pub/ my_file_repo / test.txt",
"pulp-manifest /var/www/html/pub/ my_file_repo",
"ls /var/www/html/pub/ my_file_repo PULP_MANIFEST test.txt",
"hammer product create --description \" My_Files \" --name \" My_File_Product \" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan \"",
"hammer repository create --content-type \"file\" --name \" My_Files \" --organization \" My_Organization \" --product \" My_File_Product \"",
"hammer repository upload-content --id repo_ID --organization \" My_Organization \" --path example_file",
"curl --cacert ./_katello-server-ca.crt --cert ./_My_Organization_key-cert.pem --remote-name https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"curl --remote-name http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"hammer repository list --content-type file ---|------------|-------------------|--------------|---- ID | NAME | PRODUCT | CONTENT TYPE | URL ---|------------|-------------------|--------------|---- 7 | My_Files | My_File_Product | file | ---|------------|-------------------|--------------|----",
"hammer repository info --name \" My_Files \" --organization-id My_Organization_ID --product \" My_File_Product \"",
"Publish Via HTTP: yes Published At: https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_File_Product_Label / My_Files_Label /",
"Publish Via HTTP: no Published At: https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_File_Product_Label / My_Files_Label /",
"curl --cacert ./_katello-server-ca.crt --cert ./_My_Organization_key-cert.pem --remote-name https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"curl --remote-name http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"satellite-maintain service stop",
"satellite-maintain packages install nfs-utils",
"mkdir /mnt/temp mount -o rw nfs.example.com:/Satellite/pulp /mnt/temp",
"cp -r /var/lib/pulp/* /mnt/temp/.",
"umount /mnt/temp",
"rm -rf /var/lib/pulp/*",
"nfs.example.com:/Satellite/pulp /var/lib/pulp nfs rw,hard,intr,context=\"system_u:object_r:pulpcore_var_lib_t:s0\"",
"mount -a",
"df Filesystem 1K-blocks Used Available Use% Mounted on nfs.example.com:/Satellite/pulp 309506048 58632800 235128224 20% /var/lib/pulp",
"ls /var/lib/pulp",
"satellite-maintain service start"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/managing_content/index |
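Read together, the hammer commands in the Satellite record above form a repeatable content workflow. The condensed sketch below only strings those documented commands together; the organization, product, repository, content view, environment names, the upstream URL, and the repository ID are placeholders to replace with your own values (use hammer repository list to find the real ID).

# Create a custom product and a yum repository under it, then synchronize it.
hammer product create --name "My_Product" --organization "My_Organization"
hammer repository create --content-type yum --name "My_Repository" \
  --product "My_Product" --organization "My_Organization" \
  --url "https://repo.example.com/pub/my_repo/"
hammer repository synchronize --name "My_Repository" \
  --product "My_Product" --organization "My_Organization"

# Wrap the repository in a content view, publish it, and promote the version
# into a lifecycle environment.
hammer content-view create --name "My_Content_View" \
  --organization "My_Organization" --repository-ids 1
hammer content-view publish --name "My_Content_View" --organization "My_Organization"
hammer content-view version promote --content-view "My_Content_View" --version 1 \
  --to-lifecycle-environment "Development" --organization "My_Organization"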
3.9. The SET Statement | 3.9. The SET Statement Execution properties are set on the connection using the SET statement. The SET statement is not yet a language feature of JBoss Data Virtualization and is handled only in the JDBC client. SET Syntax: SET [PAYLOAD] (parameter|SESSION AUTHORIZATION) value SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL (READ UNCOMMITTED|READ COMMITTED|REPEATABLE READ|SERIALIZABLE) Syntax Rules: The parameter must be an identifier. If quoted, it can contain spaces and other special characters, but otherwise it can not. The value may be either a non-quoted identifier or a quoted string literal value. If payload is specified, for example, SET PAYLOAD x y , then a session scoped payload properties object will have the corresponding name value pair set. The payload object is not fully session scoped. It will be removed from the session when the XAConnection handle is closed/returned to the pool (assumes the use of TeiidDataSource). The session scoped payload is superseded by usage of the TeiidStatement.setPayload . Using SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL is equivalent to calling Connection.setTransactionIsolation with the corresponding level. The SET statement is most commonly used to control planning and execution. SET SHOWPLAN (ON|DEBUG|OFF) SET NOEXEC (ON|OFF) The following is an example of how to use the SET statement to enable a debug plan: The SET statement may also be used to control authorization. A SET SESSION AUTHORIZATION statement will perform a reauthentication (see Section 2.6, "Reauthentication" ) given the credentials currently set on the connection. The connection credentials may be changed by issuing a SET PASSWORD statement. | [
"Statement s = connection.createStatement(); s.execute(\"SET SHOWPLAN DEBUG\"); Statement s1 = connection.createStatement(); ResultSet rs = s1.executeQuery(\"select col from table\"); ResultSet planRs = s1.executeQuery(\"SHOW PLAN\"); planRs.next(); String debugLog = planRs.getString(\"DEBUG_LOG\"); Query Plan without executing the query s.execute(\"SET NOEXEC ON\"); s.execute(\"SET SHOWPLAN DEBUG\"); e.execute(\"SET NOEXEC OFF\");",
"Statement s = connection.createStatement(); s.execute(\"SET PASSWORD 'someval'\"); s.execute(\"SET SESSION AUTHORIZATION 'newuser'\");"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/The_SET_Statement1 |
Chapter 70. Timer Source | Chapter 70. Timer Source Produces periodic events with a custom payload. 70.1. Configuration Options The following table summarizes the configuration options available for the timer-source Kamelet: Property Name Description Type Default Example message * Message The message to generate string "hello world" contentType Content Type The content type of the message being generated string "text/plain" period Period The interval between two events in milliseconds integer 1000 repeatCount Repeat Count Specifies the maximum limit of the number of fires integer Note Fields marked with an asterisk (*) are mandatory. 70.2. Dependencies At runtime, the timer-source Kamelet relies upon the presence of the following dependencies: camel:core camel:timer camel:kamelet 70.3. Usage This section describes how you can use the timer-source . 70.3.1. Knative Source You can use the timer-source Kamelet as a Knative source by binding it to a Knative object. timer-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: timer-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "hello world" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 70.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 70.3.1.2. Procedure for using the cluster CLI Save the timer-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f timer-source-binding.yaml 70.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind timer-source -p "source.message=hello world" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 70.3.2. Kafka Source You can use the timer-source Kamelet as a Kafka source by binding it to a Kafka topic. timer-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: timer-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "hello world" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 70.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 70.3.2.2. Procedure for using the cluster CLI Save the timer-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f timer-source-binding.yaml 70.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind timer-source -p "source.message=hello world" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 70.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/timer-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: timer-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"hello world\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f timer-source-binding.yaml",
"kamel bind timer-source -p \"source.message=hello world\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: timer-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"hello world\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f timer-source-binding.yaml",
"kamel bind timer-source -p \"source.message=hello world\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/timer-source |
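To verify the Kafka binding from the command line, one possible sequence is sketched below. It assumes the Camel K operator is installed, the timer-source-binding.yaml file from the record above has been saved locally, and it is run in the namespace that holds the Kafka topic; the five-minute timeout is arbitrary.

# Create the binding and wait for Camel K to report it ready.
oc apply -f timer-source-binding.yaml
oc wait --for=condition=Ready kameletbinding/timer-source-binding --timeout=300s

# Camel K materializes the binding as an integration; list it and check its
# pod log to confirm the route is running.
oc get integration
oc logs -l camel.apache.org/integration=timer-source-binding --tail=20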
Chapter 29. Using Red Hat JBoss Data Grid with Microsoft Azure | Chapter 29. Using Red Hat JBoss Data Grid with Microsoft Azure While JBoss Data Grid is fully supported on Microsoft Azure, the default discovery mechanism, UDP multicasting, is not supported in this environment. If the cluster needs to dynamically add or remove members one of the following protocols should be used: JDBC_PING - Discovery protocol that utilizes a shared database. TCPGOSSIP - Discovery protocol that utilizes shared GossipRouter processes. Alternatively, if there is a fixed set of nodes in the cluster then TCPPING may be used instead. An example configuration using TCPPING is found in Section 26.1.3, "Configure JGroups Socket Binding" . 29.1. The JDBC_PING JGroups Protocol The JDBC_PING discovery protocol uses a shared database to store information about the nodes in the cluster. With Microsoft Azure this should be a SQL database, and once the database has been created the JDBC connection URL may be obtained by performing the following steps: Select the properties of the SQL Database to be used. Navigate to the Connection Properties section and click the Show database connection strings link. Click JDBC Connection URL . This value will be used for the connection_url property when configuring JDBC_PING . Once the connection_url has been retrieved, the SQL Driver may be obtained from Microsoft JDBC Drivers for SQL Server . This driver will be used in the following steps. JDBC_PING Parameters The following parameters must be defined, and are used to connect to the SQL database: connection_url - the JDBC Connection URL obtained in the above steps. connection_username - The database username, obtained from the connection_url . connection_password - The database password, obtained from the connection_url . connection_driver - The driver used to connect to the database. The application or server must include the JDBC driver JAR file on the classpath. Configuring JBoss Data Grid to use JDBC_PING (Library Mode) In Library Mode the JGroups xml file should be used to configure JDBC_PING ; however, there is no JDBC_PING configuration included by default. It is recommended to use one of the preexisting files specified in Section 26.2.2, "Pre-Configured JGroups Files" and then adjust the configuration to include JDBC_PING . For instance, default-configs/default-jgroups-ec2.xml could be selected and the S3_PING protocol removed, and then the following block added in its place: Configuring JBoss Data Grid to use JDBC_PING (Remote Client-Server Mode) In Remote Client-Server Mode a stack may be defined for JDBC_PING in the jgroups subsystem of the server's configuration file. The following configuration snippet contains an example of this: | [
"<JDBC_PING connection_url=\"USD{jboss.jgroups.jdbc_ping.connection_url:}\" connection_username=\"USD{jboss.jgroups.jdbc_ping.connection_username:}\" connection_password=\"USD{jboss.jgroups.jdbc_ping.connection_password:}\" connection_driver=\"com.microsoft.sqlserver.jdbc.SQLServerDriver\" />",
"<subsystem xmlns=\"urn:infinispan:server:jgroups:6.1\" default-stack=\"USD{jboss.default.jgroups.stack:jdbc_ping}\"> [...] <stack name=\"jdbc_ping\"> <transport type=\"TCP\" socket-binding=\"jgroups-tcp\"/> <protocol type=\"JDBC_PING\"> <property name=\"connection_url\"> USD{jboss.jgroups.jdbc_ping.connection_url:} </property> <property name=\"connection_username\">USD{jboss.jgroups.jdbc_ping.connection_username:}</property> <property name=\"connection_password\">USD{jboss.jgroups.jdbc_ping.connection_password:}</property> <property name=\"connection_driver\">com.microsoft.sqlserver.jdbc.SQLServerDriver</property> </protocol> <protocol type=\"MERGE3\"/> <protocol type=\"FD_SOCK\" socket-binding=\"jgroups-tcp-fd\"/> <protocol type=\"FD_ALL\"/> <protocol type=\"VERIFY_SUSPECT\"/> <protocol type=\"pbcast.NAKACK2\"> <property name=\"use_mcast_xmit\">false</property> </protocol> <protocol type=\"UNICAST3\"/> <protocol type=\"pbcast.STABLE\"/> <protocol type=\"pbcast.GMS\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG2\"/> </stack> [...] </subsystem>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/using_red_hat_jboss_data_grid_with_microsoft_azure |
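One way to supply the connection values referenced by the ${...} placeholders in the configuration above is to pass them as system properties when starting the server in Remote Client-Server Mode. The sketch below is an assumption-heavy illustration: the install path, JDBC URL, and credentials are placeholders, and the Microsoft SQL Server JDBC driver is expected to have been installed on the server beforehand.

# Start the clustered profile with the jdbc_ping stack selected and its
# connection settings resolved from system properties (example values only).
JDG_HOME=/opt/jboss-datagrid-server    # illustrative install location
"$JDG_HOME"/bin/clustered.sh \
  -Djboss.default.jgroups.stack=jdbc_ping \
  -Djboss.jgroups.jdbc_ping.connection_url='jdbc:sqlserver://example.database.windows.net:1433;database=jgroupsping' \
  -Djboss.jgroups.jdbc_ping.connection_username=jgroups_user \
  -Djboss.jgroups.jdbc_ping.connection_password=changeme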
Preface | Preface Date of release: 2025-02-24 | null | https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/release_notes_for_node.js_22/pr01 |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_content_navigator_creator_guide/providing-feedback |
probe::scsi.ioentry | probe::scsi.ioentry Name probe::scsi.ioentry - Prepares a SCSI mid-layer request Synopsis scsi.ioentry Values req_addr The current struct request pointer, as a number disk_major The major number of the disk (-1 if no information) device_state_str The current state of the device, as a string disk_minor The minor number of the disk (-1 if no information) device_state The current state of the device | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scsi-ioentry |
Using the Streams for Apache Kafka Console | Using the Streams for Apache Kafka Console Red Hat Streams for Apache Kafka 2.7 The Streams for Apache Kafka console supports your deployment of Streams for Apache Kafka. | [
"SESSION_SECRET=USD(LC_CTYPE=C openssl rand -base64 32) echo \"Generated SESSION_SECRET: USDSESSION_SECRET\" NEXTAUTH_SECRET=USD(LC_CTYPE=C openssl rand -base64 32) echo \"Generated NEXTAUTH_SECRET: USDNEXTAUTH_SECRET\"",
"sed -i 's/USD{NAMESPACE}/'\"my-project\"'/g' <resource_path>/console-prometheus-server.clusterrolebinding.yaml",
"Prometheus security resources apply -n my-project -f <resource_path>/prometheus/console-prometheus-server.clusterrole.yaml apply -n my-project -f <resource_path>/prometheus/console-prometheus-server.serviceaccount.yaml apply -n my-project -f <resource_path>/prometheus/console-prometheus-server.clusterrolebinding.yaml Prometheus PodMonitor and Kubernetes scrape configurations apply -n my-project -f <resource_path>/prometheus/kafka-resources.podmonitor.yaml apply -n my-project -f <resource_path>/prometheus/kubernetes-scrape-configs.secret.yaml Prometheus instance apply -n my-project -f <resource_path>/prometheus/console-prometheus.prometheus.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: console-kafka-user1 labels: strimzi.io/cluster: console-kafka spec: authentication: type: scram-sha-512 authorization: type: simple acls: - resource: type: cluster name: \"\" patternType: literal operations: - Describe - DescribeConfigs - resource: type: topic name: \"*\" patternType: literal operations: - Read - Describe - DescribeConfigs - resource: type: group name: \"*\" patternType: literal operations: - Read - Describe",
"sed -i 's/type: USD{LISTENER_TYPE}/type: route/g' console-kafka.kafka.yaml sed -i 's/\\USD{CLUSTER_DOMAIN}/'\"<my_router_base_domain>\"'/g' console-kafka.kafka.yaml",
"Metrics configuration apply -n my-project -f <resource_path>/console-kafka-metrics.configmap.yaml Create the cluster apply -n my-project -f <resource_path>/console-kafka.kafka.yaml Create a user for the cluster apply -n my-project -f <resource_path>/console-kafka-user1.kafkauser.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: kafka-resources labels: app: console-kafka-monitor spec: namespaceSelector: matchNames: - <kafka_namespace> #",
"get pods -n <my_console_namespace>",
"NAME READY STATUS RESTARTS strimzi-cluster-operator 1/1 Running 0 console-kafka-kafka-0 1/1 Running 0 console-kafka-kafka-1 1/1 Running 0 console-kafka-kafka-2 1/1 Running 0 prometheus-operator-... 1/1 Running 0 prometheus-console-prometheus 1/1 Running 0",
"sed -i 's/USD{NAMESPACE}/'\"my-project\"'/g' /<resource_path>console-server.clusterrolebinding.yaml",
"Console security resources apply -n my-project -f <resource_path>/console-server.clusterrole.yaml apply -n my-project -f <resource_path>/console-server.serviceaccount.yaml apply -n my-project -f <resource_path>/console-server.clusterrolebinding.yaml Console user interface service apply -n my-project -f <resource_path>/console-ui.service.yaml Console route apply -n my-project -f <resource_path>/console-ui.route.yaml",
"create secret generic console-ui-secrets -n my-project --from-literal=SESSION_SECRET=\"<session_secret_value>\" --from-literal=NEXTAUTH_SECRET=\"<next_secret_value>\"",
"get route console-ui-route -n my-project -o jsonpath='{.spec.host}'",
"sed -i 's/USD{CONSOLE_HOSTNAME}/'\"<route_hostname>\"'/g' console.deployment.yaml sed -i 's/USD{NAMESPACE}/'\"my-project\"'/g' console.deployment.yaml",
"apply -n my-project -f <resource_path>/console.deployment.yaml",
"NAME READY STATUS RESTARTS strimzi-cluster-operator 1/1 Running 0 console-kafka-kafka-0 1/1 Running 0 console-kafka-kafka-0 1/1 Running 0 console-kafka-kafka-0 1/1 Running 0 prometheus-operator-... 1/1 Running 0 prometheus-console-prometheus 1/1 Running 0 console-... 2/2 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: console-kafka namespace: my-project spec: entityOperator: topicOperator: {} userOperator: {} kafka: authorization: type: simple config: allow.everyone.if.no.acl.found: 'true' default.replication.factor: 3 inter.broker.protocol.version: '3.6' min.insync.replicas: 2 offsets.topic.replication.factor: 3 transaction.state.log.min.isr: 2 transaction.state.log.replication.factor: 3 listeners: 1 - name: route1 port: 9094 tls: true type: route authentication: type: scram-sha-512 replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 10Gi deleteClaim: false metricsConfig: 2 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: console-cluster-metrics key: kafka-metrics-config.yml version: 3.6.0 zookeeper: replicas: 3 storage: deleteClaim: false size: 10Gi type: persistent-claim metricsConfig: 3 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: console-cluster-metrics key: zookeeper-metrics-config.yml",
"apiVersion: apps/v1 kind: Deployment metadata: name: console spec: replicas: 1 # template: metadata: labels: app: console spec: # containers: - name: console-api # env: - name: KAFKA_SECURITY_PROTOCOL 1 value: SASL_SSL - name: KAFKA_SASL_MECHANISM 2 value: SCRAM-SHA-512 - name: CONSOLE_KAFKA_CLUSTER1 3 value: my-project/console-kafka - name: CONSOLE_KAFKA_CLUSTER1_BOOTSTRAP_SERVERS 4 value: console-kafka-route1-bootstrap-my-project.router.com:443 - name: CONSOLE_KAFKA_CLUSTER1_SASL_JAAS_CONFIG 5 valueFrom: secretKeyRef: name: console-kafka-user1 key: sasl.jaas.config - name: console-ui # env: - name: NEXTAUTH_SECRET 6 valueFrom: secretKeyRef: name: console-ui-secrets key: NEXTAUTH_SECRET - name: SESSION_SECRET 7 valueFrom: secretKeyRef: name: console-ui-secrets key: SESSION_SECRET - name: NEXTAUTH_URL 8 value: 'https://console-ui-route-my-project.router.com' - name: BACKEND_URL 9 value: 'http://127.0.0.1:8080' - name: CONSOLE_METRICS_PROMETHEUS_URL 10 value: 'http://prometheus-operated.my-project.svc.cluster.local:9090'",
"dnf install <package_name>",
"dnf install <path_to_download_package>"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html-single/using_the_streams_for_apache_kafka_console/index |
Chapter 5. 4.2 Release Notes | Chapter 5. 4.2 Release Notes 5.1. New Features The following major enhancements have been introduced in Red Hat Update Infrastructure 4.2. Importing package group metadata to a custom repository is now supported With this update, data from a comps file can now be imported and included into the metadata for a custom repository. For more information, see Importing package group metadata to a custom repository . Faster package uploads Previously, rhui-manager uploaded a single package at a time and republished the custom repository after each upload. With this update, rhui-manager creates a temporary repository instead, which contains all the packages that have to be uploaded. Additionally, rhui-manager also creates the repodata for this temporary repository. To upload the packages to the custom repository, rhui-manager synchronizes the custom repository with the temporary one. As a result, package uploads are now significantly faster. User supplied HAProxy configuration file is now supported With this update, you can specify a custom template for an HAProxy configuration when adding an HAProxy load balancer to your RHUI instance. If you do not specify a custom file, the default template is used. The default template is stored in the /usr/share/rhui-tools/templates/haproxy.cfg file. New rhui-manager aliases With this update, aliases for the --username and --password options for the rhui-manager command are now supported in the command-line interface. You can now use the short forms -u and -p respectively. gunicorn services now restart automatically With this update, gunicorn services restart automatically if they exit unexpectedly. As a result, CDS nodes can now recover from situations where these services stop unexpectedly, for example, when the system runs out of memory. Upgraded rhui-manager utility With this update, the following new option and flag are now available in rhui-manager : repo add_by_file option to add repositories using a YAML input file. For more information, see Adding a new Red Hat content repository using an input file . --sync_now flag to sync any repositories that are added. This flag is now available with the rhui-manager repo add_by_repo and rhui-manager repo add_by_file commands. 5.2. Bug Fixes The following bugs have been fixed in Red Hat Update Infrastructure 4.2 that have a significant impact on users. Migrating to a different remote file server now works as expected Previously, when rhui-installer was rerun with a remote file server, which was different from the one that was originally used, rhui-installer failed to unmount the original file server. Additionally, rhui-installer also failed to remove the record of the file server from the /etc/fstab file. Consequently, migrating to another remote file server failed. With this update, rhui-installer successfully unmounts the remote file server and also removes its record. As a result, migrating to a different remote file server now works as expected. 5.3. Known Issues This part describes known issues in Red Hat Update Infrastructure 4.2. Repository synchronization tasks fail when updating Red Hat Update Infrastructure When you update RHUI, Pulp worker processes are shut down and restarted once the update is complete. Consequently, any repository synchronization tasks that are scheduled during the update might be aborted with the following error: As a workaround, aborted repository synchronization tasks are automatically rerun at the next available time slot. The default is 6 hours.
Alternatively, you can manually synchronize repositories after the update is complete. rhui-installer ignores custom RHUI CA when updating to a newer version of RHUI When updating from RHUI version 4.1.0 or older, rhui-installer ignores the existing custom RHUI CA and generates a new RHUI CA regardless of the parameter value set in the answer.yml file. Consequently, RHUI fails to recognize clients which use the older RHUI CA. To work around this problem, specify the custom RHUI CA when running the rhui-installer --rerun command. For more information, see Updating Red Hat Update Infrastructure . rhui-manager repo unused command fails to list newly added repositories Currently, the rhui-manager repo unused command displays all the repositories that are listed in the entitlement certificate but are not used by RHUI. However, when new repositories are added to the certificate, they might not be immediately available to the RHUA node due to a previously active denylist. Consequently, they are not displayed when you run the command to list the unused repositories. To work around this problem, you must remove the repository cache. You can do so by running the following command: | [
"Aborted during worker shutdown",
"rhui-installer --rerun --user-supplied-rhui-ca-crt <custom_RHUI_CA.crt> --user-supplied-rhui-ca-key <custom_RHUI_CA_key>",
"rm -f /var/cache/rhui/ *"
]
| https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/release_notes/assembly_4-2-release-notes_release-notes |
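The workarounds described in the RHUI record above can be combined into one short post-update routine on the RHUA node. The sketch below is illustrative only; the certificate and key paths are placeholders for wherever your custom RHUI CA files are kept.

# Keep the existing custom RHUI CA when re-running the installer after an update.
rhui-installer --rerun \
  --user-supplied-rhui-ca-crt /root/custom_RHUI_CA.crt \
  --user-supplied-rhui-ca-key /root/custom_RHUI_CA.key

# Clear the repository cache so repositories newly added to the entitlement
# certificate become visible, then list the ones not yet used by RHUI.
rm -f /var/cache/rhui/*
rhui-manager repo unused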
Managing and allocating storage resources | Managing and allocating storage resources Red Hat OpenShift Data Foundation 4.18 Instructions on how to allocate storage to core services and hosted applications in OpenShift Data Foundation, including snapshot and clone. Red Hat Storage Documentation Team Abstract This document explains how to allocate storage to core services and hosted applications in Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Overview Read this document to understand how to create, configure, and allocate storage to core services or hosted applications in Red Hat OpenShift Data Foundation. Chapter 2, Storage classes shows you how to create custom storage classes. Chapter 5, Block pools provides you with information on how to create, update and delete block pools. Chapter 6, Configure storage for OpenShift Container Platform services shows you how to use OpenShift Data Foundation for core OpenShift Container Platform services. Chapter 8, Backing OpenShift Container Platform applications with OpenShift Data Foundation provides information about how to configure OpenShift Container Platform applications to use OpenShift Data Foundation. Adding file and object storage to an existing external OpenShift Data Foundation cluster Chapter 10, How to use dedicated worker nodes for Red Hat OpenShift Data Foundation provides information about how to use dedicated worker nodes for Red Hat OpenShift Data Foundation. Chapter 11, Managing Persistent Volume Claims provides information about managing Persistent Volume Claim requests, and automating the fulfillment of those requests. Chapter 12, Reclaiming space on target volumes shows you how to reclaim the actual available storage space. Chapter 14, Volume Snapshots shows you how to create, restore, and delete volume snapshots. Chapter 15, Volume cloning shows you how to create volume clones. Chapter 16, Managing container storage interface (CSI) component placements provides information about setting tolerations to bring up container storage interface component on the nodes. Chapter 2. Storage classes The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer a different behavior to applications. Note Custom storage classes are not supported for external mode OpenShift Data Foundation clusters. 2.1. 
Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Choose a Storage system for your workloads. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select Enable Encryption checkbox. Click Create to create the storage class. 2.2. Storage class with single replica You can create a storage class with a single replica to be used by your applications. This avoids redundant data copies and allows resiliency management on the application level. Warning Enabling this feature creates a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability if your application does not have its own replication. If any OSDs are lost, this feature requires very disruptive steps to recover. All applications can lose their data, and must be recreated in case of a failed OSD. Procedure Enable the single replica feature using the following command: Verify storagecluster is in Ready state: Example output: New cephblockpools are created for each failure domain. Verify cephblockpools are in Ready state: Example output: Verify new storage classes have been created: Example output: New OSD pods are created; 3 osd-prepare pods and 3 additional pods. Verify new OSD pods are in Running state: Example output: 2.2.1. Recovering after OSD lost from single replica When using replica 1, a storage class with a single replica, data loss is guaranteed when an OSD is lost. Lost OSDs go into a failing state. Use the following steps to recover after OSD loss. Procedure Follow these recovery steps to get your applications running again after data loss from replica 1. You first need to identify the domain where the failing OSD is. If you know which failure domain the failing OSD is in, run the following command to get the exact replica1-pool-name required for the steps. 
If you do not know where the failing OSD is, skip to step 2. Example output: Copy the corresponding failure domain name for use in steps, then skip to step 4. Find the OSD pod that is in Error state or CrashLoopBackoff state to find the failing OSD: Identify the replica-1 pool that had the failed OSD. Identify the node where the failed OSD was running: Identify the failureDomainLabel for the node where the failed OSD was running: The output shows the replica-1 pool name whose OSD is failing, for example: where USDfailure_domain_value is the failureDomainName. Delete the replica-1 pool. Connect to the toolbox pod: Delete the replica-1 pool. Note that you have to enter the replica-1 pool name twice in the command, for example: Replace replica1-pool-name with the failure domain name identified earlier. Purge the failing OSD by following the steps in section "Replacing operational or failed storage devices" based on your platform in the Replacing devices guide. Restart the rook-ceph operator: Recreate any affected applications in that avaialbity zone to start using the new pool with same name. Chapter 3. Persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. Persistent volume encryption is only available for RBD PVs. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault and Thales CipherTrust Manager. You can create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. You need to configure access to the KMS before creating the storage class. Note For PV encryption, you must have a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . 3.1. Access configuration for Key Management System (KMS) Based on your use case, you need to configure access to KMS using one of the following ways: Using vaulttokens : allows users to authenticate using a token Using Thales CipherTrust Manager : uses Key Management Interoperability Protocol (KMIP) Using vaulttenantsa (Technology Preview): allows users to use serviceaccounts to authenticate with Vault Important Accessing the KMS using vaulttenantsa is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 3.1.1. Configuring access to KMS using vaulttokens Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Procedure Create a secret in the tenant's namespace. In the OpenShift Container Platform web console, navigate to Workloads -> Secrets . Click Create -> Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. 
You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. 3.1.2. Configuring access to KMS using Thales CipherTrust Manager Prerequisites Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token be navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both meta-data and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Procedure To create a key to act as the Key Encryption Key (KEK) for storageclass encryption, follow the steps below: Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. 3.1.3. Configuring access to KMS using vaulttenantsa Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown below: Procedure You need to configure the Kubernetes authentication method before OpenShift Data Foundation can authenticate with and start using Vault . The following instructions create and configure serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault . Apply the following YAML to your Openshift cluster: Create a secret for serviceaccount token and CA certificate. Get the token and the CA certificate from the secret. Retrieve the OpenShift cluster endpoint. Use the information collected in the steps to set up the kubernetes authentication method in Vault as shown: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the OpenShift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . 
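The Vault-side configuration that these steps describe can be sketched with the Vault CLI as follows. This is illustrative only: it assumes the service account token, CA certificate, and cluster endpoint collected above are exported as SA_JWT_TOKEN, SA_CA_CRT, and K8S_HOST, and that a Vault policy named odf-policy already grants access to the backend path.

$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="$K8S_HOST" \
    kubernetes_ca_cert="$SA_CA_CRT"
$ vault write auth/kubernetes/role/csi-kubernetes \
    bound_service_account_names=ceph-csi-vault-sa \
    bound_service_account_namespaces='*' \
    policies=odf-policy \
    ttl=1h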
Sample YAML To create a storageclass that uses the vaulttenantsa method for PV encryption, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault. The sample yaml given below can be used to update or create the csi-kms-connection-detail ConfigMap: encryptionKMSType Set to vaulttenantsa to use service accounts for authentication with vault. vaultAddress The hostname or IP address of the vault server with the port number. vaultTLSServerName (Optional) The vault TLS server name vaultAuthPath (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a different path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace (Optional) The Vault namespace where kubernetes auth method is enabled. vaultNamespace (Optional) The Vault namespace where the backend path being used to store the keys exists vaultBackendPath The backend path in Vault where the encryption keys will be stored vaultCAFromSecret The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault vaultClientCertFromSecret The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault vaultClientCertKeyFromSecret The secret in the OpenShift Data Foundation cluster containing the client private key from Vault tenantSAName (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly. 3.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must ensure to configure access to KMS for one of the following: Using vaulttokens : Ensure to configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure to configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure to configure access as described in Configuring access to KMS using Thales CipherTrust Manager (For users on Azure platform only) Using Azure Vault: Ensure to set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, navigate to Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. 
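If you later choose an existing KMS connection when creating a storage class, it is read from the csi-kms-connection-details ConfigMap. For the vaulttenantsa method, that ConfigMap might look roughly like the following; this is a sketch in which the connection ID 1-vault, the Vault address, and the backend path are illustrative values, and only fields described above are shown.

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: csi-kms-connection-details
  namespace: openshift-storage
data:
  1-vault: |-
    {
      "encryptionKMSType": "vaulttenantsa",
      "vaultAddress": "https://vault.example.com:8200",
      "vaultBackendPath": "odf",
      "vaultTLSServerName": "vault.example.com",
      "vaultCAFromSecret": "vault-ca-cert",
      "tenantSAName": "ceph-csi-vault-sa"
    }
EOF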
Choose one of the following options to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select one of the following Key Management Service Provider and provide the required details. Vault Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault (Only for Azure users on Azure platform) For information about setting up client authentication and fetching the client credentials, see the Prerequisites in Creating an OpenShift Data Foundation cluster section of the Deploying OpenShift Data Foundation using Microsoft Azure guide. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload Certificate file in .PEM format and the certificate file must include a client certificate and a private key. Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameters that is added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage -> Storage Classes . Click the Storage class name -> YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads -> ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) -> Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. 
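As an illustration, if the encryptionKMSID identified in the storage class is 1-vault and the backend path uses KV secret engine API version 2, the edited ConfigMap entry could look like the snippet below. The connection ID, address, and path are hypothetical; keep your existing connection fields and only add vaultBackend.

  1-vault: |-
    {
      "encryptionKMSType": "vaulttokens",
      "vaultAddress": "https://vault.example.com:8200",
      "vaultBackendPath": "odf",
      "vaultBackend": "kv-v2"
    }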
Example: Click Save steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 3.2.1. Overriding Vault connection details using tenant ConfigMap The Vault connections details can be reconfigured per tenant by creating a ConfigMap in the Openshift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace will override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace. Procedure Ensure that you are in the tenant namespace. Click on Workloads -> ConfigMaps . Click on Create ConfigMap . The following is a sample yaml. The values to be overidden for the given tenant namespace can be specified under the data section as shown below: After the yaml is edited, click on Create . 3.3. Enabling and disabling key rotation when using KMS Security common practices require periodic encryption of key rotation. You can enable or disable key rotation when using KMS. 3.3.1. Enabling key rotation To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims , Namespace , or StorageClass (in the decreasing order of precedence). <value> can be @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 3.3.2. Disabling key rotation You can disable key rotation for the following: All the persistent volume claims (PVCs) of storage class A specific PVC Disabling key rotation for all PVCs of a storage class To disable key rotation for all PVCs, update the annotation of the storage class: Disabling key rotation for a specific persistent volume claim Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on: Where <PVC_NAME> is the name of the PVC that you want to disable. Apply the following to the EncryptionKeyRotationCronJob CR from the step to disable the key rotation: Update the csiaddons.openshift.io/state annotation from managed to unmanaged : Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR. Add suspend: true under the spec field: Save and exit. The key rotation will be disabled for the PVC. Chapter 4. Enabling and disabling encryption in-transit post deployment You can enable encryption in-transit for the existing clusters after the deployment of clusters both in internal and external modes. 4.1. Enabling encryption in-transit after deployment in internal mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Procedure Patch the storagecluster to add encryption enabled as true to the storage cluster spec: Check the configurations. Wait for around 10 minutes for ceph daemons to restart and then check the pods. Remount existing volumes. 
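For reference, the patch in the first step of this procedure can be sketched as shown below, assuming the default storage cluster name ocs-storagecluster in the openshift-storage namespace; the second command checks the resulting setting before you move on to remounting volumes.

$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
    -p '{"spec": {"network": {"connections": {"encryption": {"enabled": true}}}}}'
$ oc get storagecluster ocs-storagecluster -n openshift-storage \
    -o jsonpath='{.spec.network.connections.encryption.enabled}{"\n"}'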
Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. 4.2. Disabling encryption in-transit after deployment in internal mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Encryption in-transit is enabled. Procedure Patch the storagecluster to update encryption enabled as false in the storage cluster spec: Check the configurations. Wait for around 10 minutes for ceph daemons to restart and then check the pods. Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. 4.3. Enabling encryption in-transit after deployment in external mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Procedure Patch the storagecluster to add encryption enabled as true in the storage cluster spec: Check the connection settings in the CR. 4.3.1. Applying encryption in-transit on Red Hat Ceph Storage cluster Procedure Apply Encryption in-transit settings. Check the settings. Restart all Ceph daemons. Wait for the restarting of all the daemons. 4.3.2. Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. 4.4. Disabling encryption in-transit after deployment in external mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Encryption in-transit is enabled for the external mode cluster. Procedure Removing encryption in-transit settings from Red Hat Ceph Storage cluster Remove and check encryption in-transit configurations. Restart all Ceph daemons. Patching the CR Patch the storagecluster to update encryption enabled as false in the storage cluster spec: Check the configurations. Remount existing volumes Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. Chapter 5.
Block pools The OpenShift Data Foundation operator installs a default set of storage pools depending on the platform in use. These default storage pools are owned and controlled by the operator and they cannot be deleted or modified. Note Multiple block pools are not supported for external mode OpenShift Data Foundation clusters. 5.1. Managing block pools in internal mode With OpenShift Container Platform, you can create multiple custom storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. 5.1.1. Creating a block pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click the Storage pools tab. Click Create storage pool . Select Volume type as Block . Enter Pool name . Note Using the 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Select Data protection policy as either 2-way Replication or 3-way Replication . Optional: Select the Enable compression checkbox if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Create . 5.1.2. Updating an existing pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click Storage pools . Click the Action Menu (...) at the end of the pool you want to update. Click Edit storage pool . Modify the form details as follows: Note Using the 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Change the Data protection policy to either 2-way Replication or 3-way Replication. Enable or disable the compression option. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Save . 5.1.3. Deleting a pool Use this procedure to delete a pool in OpenShift Data Foundation. Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click the Storage pools tab. Click the Action Menu (...) at the end of the pool you want to delete. Click Delete Storage Pool . Click Delete to confirm the removal of the pool. Note A pool cannot be deleted when it is bound to a PVC. You must detach all the resources before performing this activity. Note When a pool is deleted, the underlying Ceph pool is not deleted. Chapter 6.
Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the following: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have a plenty of storage capacity for the following OpenShift services that you configure: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) OpenShift tracing platform (Tempo) If the storage for these critical services runs out of space, the OpenShift cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 6.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to OpenShift Data Foundation Persistent Volume for vSphere and Bare metal platforms. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration -> Custom Resource Definitions . 
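The edit you make in the following steps adds a storage stanza under spec: that points the registry at the claim created above. A minimal sketch, assuming the claim name ocs4registry from the earlier step:

spec:
  storage:
    pvc:
      claim: ocs4registry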
Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) -> Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 6.2. Using Multicloud Object Gateway as OpenShift Image Registry backend storage You can use Multicloud Object Gateway (MCG) as OpenShift Container Platform (OCP) Image Registry backend storage in an on-prem OpenShift deployment. To configure MCG as a backend storage for the OCP image registry, follow the steps mentioned in the procedure. Prerequisites Administrative access to OCP Web Console. A running OpenShift Data Foundation cluster with MCG. Procedure Create ObjectBucketClaim by following the steps in Creating Object Bucket Claim . Create an image-registry-private-configuration-user secret. Go to the OpenShift web-console. Click ObjectBucketClaim --> ObjectBucketClaim Data . In the ObjectBucketClaim data , look for MCG access key and MCG secret key in the openshift-image-registry namespace . Create the secret using the following command: Change the status of managementState of Image Registry Operator to Managed . Edit the spec.storage section of Image Registry Operator configuration file: Get the unique-bucket-name and regionEndpoint under the Object Bucket Claim Data section from the Web Console OR you can also get the information on regionEndpoint and unique-bucket-name from the command: Add regionEndpoint as http://<Endpoint-name>:<port> if the storageclass is ceph-rgw storageclass and the endpoint points to the internal SVC from the openshift-storage namespace. An image-registry pod spawns after you make the changes to the Operator registry configuration file. Reset the image registry settings to default. Verification steps Run the following command to check if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output (Optional) You can also the run the following command to verify if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output 6.3. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. 
In the OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads -> Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 6.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads -> Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 6.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 6.3. Persistent Volume Claims attached to prometheus-k8s-* pod 6.4. Overprovision level policy control Overprovision control is a mechanism that enables you to define a quota on the amount of Persistent Volume Claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of ClusterResourceQuota . For more information see, OpenShift ClusterResourceQuota . With overprovision control, a ClusteResourceQuota is initiated, and you can set the storage capacity limit for each storage class. For more information about OpenShift Data Foundation deployment, refer to Product Documentation and select the deployment procedure according to the platform. Prerequisites Ensure that the OpenShift Data Foundation cluster is created. Procedure Deploy storagecluster either from the command line interface or the user interface. Label the application namespace. <desired_name> Specify a name for the application namespace, for example, quota-rbd . 
<desired_label> Specify a label for the storage quota, for example, storagequota1 . Edit the storagecluster to set the quota limit on the storage class. <ocs_storagecluster_name> Specify the name of the storage cluster. Add an entry for Overprovision Control with the desired hard limit into the StorageCluster.Spec : <desired_quota_limit> Specify a desired quota limit for the storage class, for example, 27Ti . <storage_class_name> Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd . <desired_quota_name> Specify a name for the storage quota, for example, quota1 . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Save the modified storagecluster . Verify that the clusterresourcequota is defined. Note Expect the clusterresourcequota with the quotaName that you defined in the step, for example, quota1 . 6.5. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 6.5.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and are always available and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 6.5.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console. 
OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration -> Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 6.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload -> Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. Chapter 7. Creating Multus networks OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each of the NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. 7.1. 
Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required, see Requirements for Multus configuration . The newly created NetworkAttachmentDefinition (NAD) can be selected during the Storage Cluster installation. This is the reason they must be created before the Storage Cluster. Note Network attachment definitions can only use the whereabouts IP address management (IPAM), and it must specify the range field. ipRanges and plugin chaining are not supported. You can select the newly created NetworkAttachmentDefinition (NAD) during the Storage Cluster installation. This is the reason you must create the NAD before you create the Storage Cluster. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface): Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks, public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting object storage device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface): Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ). Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads -> Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads -> Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. 
Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads -> Deployments . Click Workloads -> Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. Chapter 9. Adding file and object storage to an existing external OpenShift Data Foundation cluster When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster. Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage. Prerequisites OpenShift Data Foundation 4.17 is installed and running on the OpenShift Container Platform version 4.17 or above. Also, the OpenShift Data Foundation Cluster in external mode is in the Ready state. Your external Red Hat Ceph Storage cluster is configured with one or both of the following: a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage a Metadata Server (MDS) pool for file storage Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using one of the following methods, either CSV or ConfigMap. Important Downloading the ceph-external-cluster-details-exporter.py python script using CSV will no longer be supported from version OpenShift Data Foundation 4.19 and onward. Using the ConfigMap will be the only supported method. 
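When you reach the generation step in the procedure below, the script invocation typically looks something like the following. This is a sketch only: the endpoint values are placeholders, the flags shown are the ones described in this section, and you must keep all parameters consistent with the ones used during the original external-mode deployment.

$ python3 ceph-external-cluster-details-exporter.py \
    --run-as-user client.healthchecker \
    --rgw-endpoint 192.168.10.20:8080 \
    --rgw-pool-prefix default \
    > external-cluster-config.json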
CSV ConfigMap Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. Generate and save configuration details from the external Red Hat Ceph Storage cluster. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-endpoint Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter) --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. User permissions are updated as shown: Note Ensure that all the parameters (including the optional arguments) except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text. Upload the generated JSON file. Log in to the OpenShift web console. Click Workloads -> Secrets . Set project to openshift-storage . Click on rook-ceph-external-cluster-details . Click Actions (...) -> Edit Secret Click Browse and upload the external-cluster-config.json file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data foundation -> Storage Systems tab and then click on the storage system name. On the Overview -> Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. If you added a Metadata Server for file storage: Click Workloads -> Pods and verify that csi-cephfsplugin-* pods are created new and are in the Running state. Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created. If you added the Ceph Object Gateway for object storage: Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created. To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data foundation -> Storage Systems tab and then click on the storage system name. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. Chapter 10. 
How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 10.3, "Manual creation of infrastructure nodes" section for more information. 10.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non OpenShift Data Foundation resources to be scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on infrastructure node that will be used to run OpenShift Data Foundation services: 10.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 10.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. 
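When labeling nodes by hand, the label and taint called out in the requirements below are typically applied with commands along these lines (the node name is a placeholder):

$ oc label node <node-name> node-role.kubernetes.io/infra=""
$ oc adm taint nodes <node-name> node.ocs.openshift.io/storage=true:NoSchedule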
To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding node-role node-role.kubernetes.io/infra="" and OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 10.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute -> Nodes , and then select the node which has to be tainted. In the Details page click on Edit taints . Enter the values in the Key <nodes.openshift.ocs.io/storage>, Value <true> and in the Effect <Noschedule> field. Click Save. Verification steps Follow the steps to verify that the node has tainted successfully: Navigate to Compute -> Nodes . Select the node to verify its status, and then click on the YAML tab. In the specs section check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere . Chapter 11. Managing Persistent Volume Claims 11.1. Configuring application pods to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage -> StorageClasses to view default storage classes. Procedure Create a Persistent Volume Claim (PVC) for the application to use. In OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project for the application pod. Click Create Persistent Volume Claim . Specify a Storage Class provided by OpenShift Data Foundation. Specify the PVC Name , for example, myclaim . Select the required Access Mode . Note The Access Mode , Shared access (RWX) is not supported in IBM FlashSystem. For Rados Block Device (RBD), if the Access mode is ReadWriteOnce ( RWO ), select the required Volume mode . The default volume mode is Filesystem . Specify a Size as per application requirement. Click Create and wait until the PVC is in Bound status. Configure a new or existing application pod to use the new PVC. For a new application pod, perform the following steps: Click Workloads -> Pods . Create a new application pod. Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod. For example: For an existing application pod, perform the following steps: Click Workloads -> Deployment Configs . Search for the required deployment config associated with the application pod. Click on its Action menu (...) -> Edit Deployment Config . Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod and click Save . 
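A minimal sketch of that addition follows, assuming the claim is named myclaim as in the earlier step; the container name, volume name, and mount path are illustrative, and the exact nesting depends on whether you are editing a Pod or the template of a Deployment Config.

spec:
  template:
    spec:
      containers:
        - name: app
          volumeMounts:
            - name: mypvc-vol
              mountPath: /var/lib/app-data
      volumes:
        - name: mypvc-vol
          persistentVolumeClaim:
            claimName: myclaim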
For example: Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project for the application pod. Verify that the application pod appears with a status of Running . Click the application pod name to view pod details. Scroll down to Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim . 11.2. Viewing Persistent Volume Claim request status Use this procedure to view the status of a PVC request. Prerequisites Administrator access to OpenShift Data Foundation. Procedure Log in to OpenShift Web Console. Click Storage -> Persistent Volume Claims Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list Check the Status column corresponding to the required PVC. Click the required Name to view the PVC details. 11.3. Reviewing Persistent Volume Claim request events Use this procedure to review and address Persistent Volume Claim (PVC) request events. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click Overview -> Block and File . Locate the Inventory card to see the number of PVCs with errors. Click Storage -> Persistent Volume Claims Search for the required PVC using the Filter textbox. Click on the PVC name and navigate to Events Address the events as required or as directed. 11.4. Expanding Persistent Volume Claims OpenShift Data Foundation 4.6 onwards has the ability to expand Persistent Volume Claims providing more flexibility in the management of persistent storage resources. Expansion is supported for the following Persistent Volumes: PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph File System (CephFS) for volume mode Filesystem . PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem . PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Block . PVC with ReadWriteOncePod (RWOP) that is based on Ceph File System (CephFS) or Network File System (NFS) for volume mode Filesystem . PVC with ReadWriteOncePod (RWOP) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem . With RWOP access mode, you mount the volume as read-write by a single pod on a single node. Note PVC expansion is not supported for OSD, MON and encrypted PVCs. Prerequisites Administrator access to OpenShift Web Console. Procedure In OpenShift Web Console, navigate to Storage -> Persistent Volume Claims . Click the Action Menu (...) to the Persistent Volume Claim you want to expand. Click Expand PVC : Select the new size of the Persistent Volume Claim, then click Expand : To verify the expansion, navigate to the PVC's details page and verify the Capacity field has the correct size requested. Note When expanding PVCs based on Ceph RADOS Block Devices (RBDs), if the PVC is not already attached to a pod the Condition type is FileSystemResizePending in the PVC's details page. Once the volume is mounted, filesystem resize succeeds and the new size is reflected in the Capacity field. 11.5. Dynamic provisioning 11.5.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. 
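As a concrete illustration of parameter passing, the sketch below defines a StorageClass for the OpenShift Data Foundation RBD provisioner. Treat it as a hedged example only: the pool, cluster ID, and CSI secret names follow common Rook/ODF defaults and may differ in your cluster, so verify them before use.
cat <<EOF | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-rbd-sc
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  # clusterID, pool, and secret names below are assumed Rook/ODF defaults.
  clusterID: openshift-storage
  pool: ocs-storagecluster-cephblockpool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF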
StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. Storage plug-ins might support static provisioning, dynamic provisioning or both provisioning types. 11.5.2. Dynamic provisioning in OpenShift Data Foundation Red Hat OpenShift Data Foundation is software-defined storage that is optimised for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. OpenShift Data Foundation supports a variety of storage types, including: Block storage for databases Shared file storage for continuous integration, messaging, and data aggregation Object storage for archival, backup, and media storage Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview). In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options: Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block . Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem . Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem . Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS,NFS and RBD. With RWOP access mode, you mount the volume as read-write by a single pod on a single node. The judgment of which driver (RBD or CephFS) to use is based on the entry in the storageclass.yaml file. 11.5.3. Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plug-in name Notes OpenStack Cinder kubernetes.io/cinder AWS Elastic Block Store (EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. AWS Elastic File System (EFS) Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in. 
Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. VMware vSphere kubernetes.io/vsphere-volume Red Hat Virtualization csi.ovirt.org Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. Chapter 12. Reclaiming space on target volumes The deleted files or chunks of zero data sometimes take up storage space on the Ceph cluster resulting in inaccurate reporting of the available storage space. The reclaim space operation removes such discrepancies by executing the following operations on the target volume: fstrim - This operation is used on volumes that are in Filesystem mode and only if the volume is mounted to a pod at the time of execution of reclaim space operation. rbd sparsify - This operation is used when the volume is not attached to any pods and reclaims the space occupied by chunks of 4M-sized zeroed data. Note Only the Ceph RBD volumes support the reclaim space operation. The reclaim space operation involves a performance penalty when it is being executed. You can use one of the following methods to reclaim the space: Enabling reclaim space operation using annotating PersistentVolumeClaims (Recommended method to use for enabling reclaim space operation) Enabling reclaim space operation using ReclaimSpaceJob Enabling reclaim space operation using ReclaimSpaceCronJob 12.1. Enabling reclaim space operation by annotating PersistentVolumeClaims Use this procedure to automatically invoke the reclaim space operation to annotate persistent volume claim (PVC) based on a given schedule. Note The schedule value is in the same format as the Kubernetes CronJobs which sets the and/or interval of the recurring operation request. Recommended schedule interval is @weekly . If the schedule interval value is empty or in an invalid format, then the default schedule value is set to @weekly . Do not schedule multiple ReclaimSpace operations @weekly or at the same time. Minimum supported interval between each scheduled operation is at least 24 hours. For example, @daily (At 00:00 every day) or 0 3 * * * (At 3:00 every day). Schedule the ReclaimSpace operation during off-peak, maintenance window, or the interval when the workload input/output is expected to be low. ReclaimSpaceCronJob is recreated when the schedule is modified. It is automatically deleted when the annotation is removed. Procedure Get the PVC details. Add annotation reclaimspace.csiaddons.openshift.io/schedule=@monthly to the PVC to create reclaimspacecronjob . Verify that reclaimspacecronjob is created in the format, "<pvc-name>-xxxxxxx" . Modify the schedule to run this job automatically. Verify that the schedule for reclaimspacecronjob has been modified. 12.2. Disabling reclaim space for a specific PersistentVolumeClaim To disable reclaim space for a specific PersistentVolumeClaim (PVC), modify the associated ReclaimSpaceCronJob custom resource (CR). Identify the ReclaimSpaceCronJob CR for the PVC you want to disable reclaim space on: Replace "<PVC_NAME>" with the name of the PVC. 
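One way to identify the CR, assuming the ReclaimSpaceCronJob spec follows the CSI-addons layout described in the previous sections (the target sits under jobTemplate.spec), is the jsonpath query below; "<PVC_NAME>" is the placeholder from the step above:
# Print the name of the ReclaimSpaceCronJob whose target PVC matches <PVC_NAME>.
oc get reclaimspacecronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim=="<PVC_NAME>")]}{.metadata.name}{"\n"}{end}'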
Apply the following to the ReclaimSpaceCronJob CR from step 1 to disable the reclaim space: Update the csiaddons.openshift.io/state annotation from "managed" to "unmanaged" Replace <RECLAIMSPACECRONJOB_NAME> with the ReclaimSpaceCronJob CR. Add suspend: true under the spec field: 12.3. Enabling reclaim space operation using ReclaimSpaceJob ReclaimSpaceJob is a namespaced custom resource (CR) designed to invoke reclaim space operation on the target volume. This is a one time method that immediately starts the reclaim space operation. You have to repeat the creation of ReclaimSpaceJob CR to repeat the reclaim space operation when required. Note Recommended interval between the reclaim space operations is weekly . Ensure that the minimum interval between each operation is at least 24 hours . Schedule the reclaim space operation during off-peak, maintenance window, or when the workload input/output is expected to be low. Procedure Create and apply the following custom resource for reclaim space operation: where, target Indicates the volume target on which the operation is performed. persistentVolumeClaim Name of the PersistentVolumeClaim . backOfflimit Specifies the maximum number of retries before marking the reclaim space operation as failed . The default value is 6 . The allowed maximum and minimum values are 60 and 0 respectively. retryDeadlineSeconds Specifies the duration in which the operation might retire in seconds and it is relative to the start time. The value must be a positive integer. The default value is 600 seconds and the allowed maximum value is 1800 seconds. timeout Specifies the timeout in seconds for the grpc request sent to the CSI driver. If the timeout value is not specified, it defaults to the value of global reclaimspace timeout. Minimum allowed value for timeout is 60. Delete the custom resource after completion of the operation. 12.4. Enabling reclaim space operation using ReclaimSpaceCronJob ReclaimSpaceCronJob invokes the reclaim space operation based on the given schedule such as daily, weekly, and so on. You have to create ReclaimSpaceCronJob only once for a persistent volume claim. The CSI-addons controller creates a ReclaimSpaceJob at the requested time and interval with the schedule attribute. Note Recommended schedule interval is @weekly . Minimum interval between each scheduled operation should be at least 24 hours. For example, @daily (At 00:00 every day) or "0 3 * * *" (At 3:00 every day). Schedule the ReclaimSpace operation during off-peak, maintenance window, or the interval when workload input/output is expected to be low. Procedure Create and apply the following custom resource for reclaim space operation where, concurrencyPolicy Describes the changes when a new ReclaimSpaceJob is scheduled by the ReclaimSpaceCronJob , while a ReclaimSpaceJob is still running. The default Forbid prevents starting a new job whereas Replace can be used to delete the running job potentially in a failure state and create a new one. failedJobsHistoryLimit Specifies the number of failed ReclaimSpaceJobs that are kept for troubleshooting. jobTemplate Specifies the ReclaimSpaceJob.spec structure that describes the details of the requested ReclaimSpaceJob operation. successfulJobsHistoryLimit Specifies the number of successful ReclaimSpaceJob operations. schedule Specifieds the and/or interval of the recurring operation request and it is in the same format as the Kubernetes CronJobs . 
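Putting these fields together, a hedged sketch of a complete ReclaimSpaceCronJob follows. The name, namespace, target PVC, and schedule are illustrative values, not defaults from this guide:
cat <<EOF | oc apply -f -
apiVersion: csiaddons.openshift.io/v1alpha1
kind: ReclaimSpaceCronJob
metadata:
  name: reclaimspacecronjob-sample   # illustrative name
  namespace: my-namespace            # namespace of the target PVC
spec:
  schedule: "@weekly"
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      target:
        persistentVolumeClaim: data-pvc
      backOffLimit: 6
      retryDeadlineSeconds: 600
EOF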
Delete the ReclaimSpaceCronJob custom resource when execution of the reclaim space operation is no longer needed or when the target PVC is deleted. 12.5. Customizing timeouts required for Reclaim Space Operation Depending on the RBD volume size and its data pattern, the Reclaim Space Operation might fail with the context deadline exceeded error. You can avoid this by increasing the timeout value. The following example shows the failed status by inspecting -o yaml of the corresponding ReclaimSpaceJob : Example You can also set custom timeouts at the global level by creating the following configmap : Example Restart the csi-addons operator pod. All Reclaim Space Operations started after the above configmap creation use the customized timeout. Chapter 13. Finding and cleaning stale subvolumes (Technology Preview) Sometimes stale subvolumes do not have a corresponding Kubernetes (k8s) reference attached. These subvolumes are of no use and can be deleted. You can find and delete stale subvolumes using the ODF CLI tool. Important Deleting stale subvolumes using the ODF CLI tool is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Find the stale subvolumes by using the --stale flag with the subvolumes command: Example output: Delete the stale subvolumes: Replace <subvolumes> with a comma-separated list of subvolumes from the output of the first command. The subvolumes must be of the same filesystem and subvolumegroup. Replace <filesystem> and <subvolumegroup> with the filesystem and subvolumegroup from the output of the first command. For example: Example output: Chapter 14. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. Volume snapshot class allows an administrator to specify different attributes belonging to a volume snapshot object. The OpenShift Data Foundation operator installs default volume snapshot classes depending on the platform in use. The operator owns and controls these default volume snapshot classes and they cannot be deleted or modified. You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots. For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note Persistent Volume encryption now supports volume snapshots. 14.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure that all IO is stopped before taking the snapshot.
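A quick pre-check, not part of the official procedure, is to confirm that the PVC is Bound and to see whether any pod in the namespace still mounts it; the PVC and project names below are placeholders:
# Check the PVC status and which pods reference it as a volume.
oc get pvc myclaim -n my-project
oc get pods -n my-project -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' | grep myclaim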
Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) -> Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions -> Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage -> Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 14.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. 
Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage -> Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 14.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . From Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot click Action menu (...) -> Delete Volume Snapshot . Verfication steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage -> Volume Snapshots and ensure that the deleted volume snapshot is not listed. Chapter 15. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 15.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) -> Clone PVC . Click on the PVC that you want to clone and click Actions -> Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Enter the required size of the clone. Select the storage class in which you want to create the clone. 
The storage class can be any RBD storage class and it need not necessarily be the same as the parent PVC. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC. Chapter 16. Managing container storage interface (CSI) component placements Each cluster consists of a number of dedicated nodes such as infra and storage nodes. However, an infra node with a custom taint will not be able to use OpenShift Data Foundation Persistent Volume Claims (PVCs) on the node. So, if you want to use such nodes, you can set tolerations to bring up csi-plugins on the nodes. Procedure Edit the configmap to add the toleration for the custom taint. Remember to save before exiting the editor. Display the configmap to check the added toleration. Example output of the added toleration for the taint, nodetype=infra:NoSchedule : Note Ensure that all non-string values in the Tolerations value field has double quotation marks. For example, the values true which is of type boolean, and 1 which is of type int must be input as "true" and "1". Restart the rook-ceph-operator if the csi-cephfsplugin- * and csi-rbdplugin- * pods fail to come up on their own on the infra nodes. Example : Verification step Verify that the csi-cephfsplugin- * and csi-rbdplugin- * pods are running on the infra nodes. Chapter 17. Using 2-way replication with CephFS To reduce storage overhead with CephFS when data resiliency is not a primary concern, you can opt for using 2-way replication (replica-2). This reduces the amount of storage space used and decreases the level of fault tolerance. There are two ways to use replica-2 for CephFS: Edit the existing default pool to replica-2 and use it with the default CephFS storageclass . Add an additional CephFS data pool with replica-2 . 17.1. Editing the existing default CephFS data pool to replica-2 Use this procedure to edit the existing default CephFS pool to replica-2 and use it with the default CephFS storageclass. Procedure Patch the storagecluster to change default CephFS data pool to replica-2. Check the pool details. 17.2. Adding an additional CephFS data pool with replica-2 Use this procedure to add an additional CephFS data pool with replica-2. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage -> StorageClasses -> Create Storage Class . Select CephFS Provisioner . Under Storage Pool , click Create new storage pool . Fill in the Create Storage Pool fields. Under Data protection policy , select 2-way Replication . Confirm Storage Pool creation In the Storage Class creation form, choose the newly created Storage Pool. Confirm the Storage Class creation. Verification Click Storage -> Data Foundation . In the Storage systems tab, select the new storage system. The Details tab of the storage system reflect the correct volume and device types you chose during creation Chapter 18. Creating exports using NFS This section describes how to create exports using NFS that can then be accessed externally from the OpenShift cluster. 
Follow the instructions below to create exports and access them externally from the OpenShift Cluster: Section 18.1, "Enabling the NFS feature" Section 18.2, "Creating NFS exports" Section 18.3, "Consuming NFS exports in-cluster" Section 18.4, "Consuming NFS exports externally from the OpenShift cluster" 18.1. Enabling the NFS feature To use NFS feature, you need to enable it in the storage cluster using the command-line interface (CLI) after the cluster is created. You can also enable the NFS feature while creating the storage cluster using the user interface. Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. The OpenShift Data Foundation installation includes a CephFilesystem. Procedure Run the following command to enable the NFS feature from CLI: Verification steps NFS installation and configuration is complete when the following conditions are met: The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready . Check if all the csi-nfsplugin-* pods are running: Output has multiple pods. For example: 18.2. Creating NFS exports NFS exports are created by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass. You can create NFS PVCs two ways: Create NFS PVC using a yaml. The following is an example PVC. Note volumeMode: Block will not work for NFS volumes. <desired_name> Specify a name for the PVC, for example, my-nfs-export . The export is created once the PVC reaches the Bound state. Create NFS PVCs from the OpenShift Container Platform web console. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster. Procedure In the OpenShift Web Console, click Storage -> Persistent Volume Claims Set the Project to openshift-storage . Click Create PersistentVolumeClaim . Specify Storage Class , ocs-storagecluster-ceph-nfs . Specify the PVC Name , for example, my-nfs-export . Select the required Access Mode . Specify a Size as per application requirement. Select Volume mode as Filesystem . Note: Block mode is not supported for NFS PVCs Click Create and wait until the PVC is in Bound status. 18.3. Consuming NFS exports in-cluster Kubernetes application pods can consume NFS exports created by mounting a previously created PVC. You can mount the PVC one of two ways: Using a YAML: Below is an example pod that uses the example PVC created in Section 18.2, "Creating NFS exports" : <pvc_name> Specify the PVC you have previously created, for example, my-nfs-export . Using the OpenShift Container Platform web console. Procedure On the OpenShift Container Platform web console, navigate to Workloads -> Pods . Click Create Pod to create a new application pod. Under the metadata section add a name. For example, nfs-export-example , with namespace as openshift-storage . Under the spec: section, add containers: section with image and volumeMounts sections: For example: Under the spec: section, add volumes: section to add the NFS PVC as a volume for the application pod: For example: 18.4. Consuming NFS exports externally from the OpenShift cluster NFS clients outside of the OpenShift cluster can mount NFS exports created by a previously-created PVC. Procedure After the nfs flag is enabled, singe-server CephNFS is deployed by Rook. 
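Before continuing, you can confirm that the single-server CephNFS resource mentioned in Section 18.1 is present and Ready; this check is a convenience sketch, not an official step:
# The resource name ocs-storagecluster-cephnfs is the one created by the operator.
oc -n openshift-storage get cephnfs ocs-storagecluster-cephnfs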
You need to fetch the value of the ceph_nfs field for the nfs-ganesha server to use in the step: For example: Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service. The example below creates a LoadBalancer Service and references the NFS server created by OpenShift Data Foundation. Replace <my-nfs> with the value you got in step 1. Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and the status of the LoadBalancer Service created in the step. Get the share path from the PV. Get the name of the PV associated with the NFS export's PVC: Replace <pvc_name> with your own PVC name. For example: Use the PV name obtained previously to get the NFS export's share path: Get an ingress address for the NFS server. A service's ingress status may have multiple addresses. Choose the one desired to use for external clients. In the example below, there is only a single address: the host name ingress-id.somedomain.com . Connect the external client using the share path and ingress address from the steps. The following example mounts the export to the client's directory path /export/mount/path : If this does not work immediately, it could be that the Kubernetes environment is still taking time to configure the network resources to allow ingress to the NFS server. Chapter 19. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation, cdi.kubevirt.io/clone-strategy=copy for any of the encrypted RBD storage classes that were previously created before updating to the OpenShift Data Foundation version 4.14. This enables customer data integration (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created. When cloning an encrypted volume to a new namespace, such as, provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated. Chapter 20. Enabling faster client IO or recovery IO during OSD backfill During a maintenance window, you may want to favor either client IO or recovery IO. Favoring recovery IO over client IO will significantly reduce OSD recovery time. The valid recovery profile options are balanced , high_client_ops , and high_recovery_ops . Set the recovery profile using the following procedure. Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Check the current recovery profile: Modify the recovery profile: Replace option with either balanced , high_client_ops , or high_recovery_ops . Verify the updated recovery profile: Chapter 21. Setting Ceph OSD full thresholds You can set Ceph OSD full thresholds using the ODF CLI tool or by updating the StorageCluster CR. 21.1. Setting Ceph OSD full thresholds using the ODF CLI tool You can set Ceph OSD full thresholds temporarily by using the ODF CLI tool. 
This is necessary in cases when the cluster gets into a full state and the thresholds need to be immediately increased. Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Use the set command to adjust Ceph full thresholds. The set command supports the subcommands full , backfillfull , and nearfull . See the following examples for how to use each subcommand. full This subcommand allows updating the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, set Ceph OSD full ratio to 0.9 and then add capacity: For instructions to add capacity for you specific use case, see the Scaling storage guide . If OSDs continue to be in stuck , pending , or do not come up at all: Stop all IOs. Increase the full ratio to 0.92 : Wait for the cluster rebalance to happen. Once cluster rebalance is complete, change the full ratio back to its original value of 0.85: backfillfull This subcommand allows updating the Ceph OSDd backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, to set backfillfull to 0.85 : nearfull This subcommand allows updating the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, to set nearfull to 0.8 : 21.2. Setting Ceph OSD full thresholds by updating the StorageCluster CR You can set Ceph OSD full thresholds by updating the StorageCluster CR. Use this procedure if you want to override the default settings. Procedure You can update the StorageCluster CR to change the settings for full , backfillfull , and nearfull . full Use this following command to update the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, to set Ceph OSD full ratio to 0.9 : backfillfull Use the following command to set the Ceph OSDd backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, set backfill full to 0.85 : nearfull Use the following command to set the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, set nearfull to 0.8 : | [
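A hedged sketch of such a patch is shown below; the managedResources.cephCluster field names are an assumption based on recent StorageCluster APIs and should be checked against the CRD shipped with your version before use:
# Set the nearfull ratio to 0.8 in the StorageCluster CR (assumed field path).
oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
  -p '{"spec": {"managedResources": {"cephCluster": {"nearFullRatio": 0.8}}}}'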
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephNonResilientPools/enable\", \"value\": true }]'",
"oc get storagecluster",
"NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 10m Ready 2024-02-05T13:56:15Z 4.17.0",
"oc get cephblockpools",
"NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-east-1a Ready ocs-storagecluster-cephblockpool-us-east-1b Ready ocs-storagecluster-cephblockpool-us-east-1c Ready",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 104m gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m gp3-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m ocs-storagecluster-ceph-non-resilient-rbd openshift-storage.rbd.csi.ceph.com Delete WaitForFirstConsumer true 46m ocs-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com Delete Immediate true 52m ocs-storagecluster-cephfs openshift-storage.cephfs.csi.ceph.com Delete Immediate true 52m openshift-storage.noobaa.io openshift-storage.noobaa.io/obc Delete Immediate false 50m",
"oc get pods | grep osd",
"rook-ceph-osd-0-6dc76777bc-snhnm 2/2 Running 0 9m50s rook-ceph-osd-1-768bdfdc4-h5n7k 2/2 Running 0 9m48s rook-ceph-osd-2-69878645c4-bkdlq 2/2 Running 0 9m37s rook-ceph-osd-3-64c44d7d76-zfxq9 2/2 Running 0 5m23s rook-ceph-osd-4-654445b78f-nsgjb 2/2 Running 0 5m23s rook-ceph-osd-5-5775949f57-vz6jp 2/2 Running 0 5m22s rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv 0/1 Completed 0 10m",
"oc get cephblockpools",
"NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-south-1 Ready ocs-storagecluster-cephblockpool-us-south-2 Ready ocs-storagecluster-cephblockpool-us-south-3 Ready",
"oc get pods -n openshift-storage -l app=rook-ceph-osd | grep 'CrashLoopBackOff\\|Error'",
"failed_osd_id=0 #replace with the ID of the failed OSD",
"failure_domain_label=USD(oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml | grep domainLabel |head -1 |awk -F':' '{print USD2}')",
"failure_domain_value=USD\"(oc get pods USDfailed_osd_id -oyaml |grep topology-location-zone |awk '{print USD2}')\"",
"replica1-pool-name= \"ocs-storagecluster-cephblockpool-USDfailure_domain_value\"",
"toolbox=USD(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[*].metadata.name}') rsh USDtoolbox -n openshift-storage",
"ceph osd pool rm <replica1-pool-name> <replica1-pool-name> --yes-i-really-really-mean-it",
"oc delete pod -l rook-ceph-operator -n openshift-storage",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF",
"apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: rbac.authorization.k8s.io",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: rbd-csi-vault-token-review-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: \"rbd-csi-vault-token-review\" type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>",
"apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details",
"encryptionKMSID: 1-vault",
"kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }",
"--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\"",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": true}}} }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | yq '.spec.network' connections: encryption: enabled: true",
"oc get pods -n openshift-storage | grep rook-ceph rook-ceph-crashcollector-ip-10-0-2-111.ec2.internal-796ffcm9kn9 1/1 Running 0 5m11s rook-ceph-crashcollector-ip-10-0-27-61.ec2.internal-854b4d8sk5z 1/1 Running 0 5m9s rook-ceph-crashcollector-ip-10-0-33-53.ec2.internal-589d9f4f8vx 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-2-111.ec2.internal-6d48cdc5fd-2tmsl 1/1 Running 0 5m9s rook-ceph-exporter-ip-10-0-27-61.ec2.internal-546c66c7cc-9lnpz 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-33-53.ec2.internal-b5555994c-x8mzz 1/1 Running 0 5m5s rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7bd754f6vwps2 2/2 Running 0 4m56s rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6cc5cc647c78m 2/2 Running 0 4m30s rook-ceph-mgr-a-6f8467578d-f8279 3/3 Running 0 3m40s rook-ceph-mgr-b-66754d99cf-9q58g 3/3 Running 0 3m27s rook-ceph-mon-a-75bc5dd655-tvdqf 2/2 Running 0 4m7s rook-ceph-mon-b-6b6d4d9b4c-tjbpz 2/2 Running 0 4m55s rook-ceph-mon-c-7456bb5f67-rtwpj 2/2 Running 0 4m32s rook-ceph-operator-7b5b9cdb9b-tvmb6 1/1 Running 0 45m rook-ceph-osd-0-b78dd99f6-n4wbm 2/2 Running 0 3m3s rook-ceph-osd-1-5887bf6d8d-2sncc 2/2 Running 0 2m39s rook-ceph-osd-2-784b59c4c8-44phh 2/2 Running 0 2m14s rook-ceph-osd-prepare-a075cf185c9b2e5d92ec3f7769565e38-ztrms 0/1 Completed 0 42m rook-ceph-osd-prepare-b4b48dc5e3bef99ab377e2a255a9142a-mvgnd 0/1 Completed 0 42m rook-ceph-osd-prepare-fae2ea2ad4aacbf62010ae5b60b87f57-6t9l5 0/1 Completed 0 42m",
"oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 27m Ready 2024-11-06T16:15:26Z 4.18.0",
"~ USD oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": false}}} }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | yq '.spec.network' connections: encryption: enabled: false",
"oc get pods -n openshift-storage | grep rook-ceph rook-ceph-crashcollector-ip-10-0-2-111.ec2.internal-796ffcm9kn9 1/1 Running 0 5m11s rook-ceph-crashcollector-ip-10-0-27-61.ec2.internal-854b4d8sk5z 1/1 Running 0 5m9s rook-ceph-crashcollector-ip-10-0-33-53.ec2.internal-589d9f4f8vx 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-2-111.ec2.internal-6d48cdc5fd-2tmsl 1/1 Running 0 5m9s rook-ceph-exporter-ip-10-0-27-61.ec2.internal-546c66c7cc-9lnpz 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-33-53.ec2.internal-b5555994c-x8mzz 1/1 Running 0 5m5s rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7bd754f6vwps2 2/2 Running 0 4m56s rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6cc5cc647c78m 2/2 Running 0 4m30s rook-ceph-mgr-a-6f8467578d-f8279 3/3 Running 0 3m40s rook-ceph-mgr-b-66754d99cf-9q58g 3/3 Running 0 3m27s rook-ceph-mon-a-75bc5dd655-tvdqf 2/2 Running 0 4m7s rook-ceph-mon-b-6b6d4d9b4c-tjbpz 2/2 Running 0 4m55s rook-ceph-mon-c-7456bb5f67-rtwpj 2/2 Running 0 4m32s rook-ceph-operator-7b5b9cdb9b-tvmb6 1/1 Running 0 45m rook-ceph-osd-0-b78dd99f6-n4wbm 2/2 Running 0 3m3s rook-ceph-osd-1-5887bf6d8d-2sncc 2/2 Running 0 2m39s rook-ceph-osd-2-784b59c4c8-44phh 2/2 Running 0 2m14s rook-ceph-osd-prepare-a075cf185c9b2e5d92ec3f7769565e38-ztrms 0/1 Completed 0 42m rook-ceph-osd-prepare-b4b48dc5e3bef99ab377e2a255a9142a-mvgnd 0/1 Completed 0 42m rook-ceph-osd-prepare-fae2ea2ad4aacbf62010ae5b60b87f57-6t9l5 0/1 Completed 0 42m",
"oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 27m Ready 2024-11-06T16:15:26Z 4.18.0",
"oc patch storagecluster ocs-external-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": true}}} }]' storagecluster.ocs.openshift.io/ocs-external-storagecluster patched",
"get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 9h Ready true 2024-11-06T20:48:03Z 4.18.0",
"oc get storagecluster ocs-external-storagecluster -o yaml | yq '.spec.network.connections' encryption: enabled: true",
"root@ceph-client ~]# ceph config set global ms_client_mode secure ceph config set global ms_cluster_mode secure ceph config set global ms_service_mode secure ceph config set global rbd_default_map_options ms_mode=secure",
"ceph config dump | grep ms_ ceph config dump | grep ms_ global basic ms_client_mode secure * global basic ms_cluster_mode secure * global basic ms_service_mode secure * global advanced rbd_default_map_options ms_mode=secure *",
"ceph orch ls --format plain | tail -n +2 | awk '{print USD1}' | xargs -I {} ceph orch restart {} Scheduled to restart alertmanager.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-2 on host 'osd-2' Scheduled to restart ceph-exporter.osd-3 on host 'osd-3' Scheduled to restart ceph-exporter.osd-1 on host 'osd-1' Scheduled to restart crash.osd-0 on host 'osd-0' Scheduled to restart crash.osd-2 on host 'osd-2' Scheduled to restart crash.osd-3 on host 'osd-3' Scheduled to restart crash.osd-1 on host 'osd-1' Scheduled to restart grafana.osd-0 on host 'osd-0' Scheduled to restart mds.fsvol001.osd-0.lpciqk on host 'osd-0' Scheduled to restart mds.fsvol001.osd-2.wocnxz on host 'osd-2' Scheduled to restart mgr.osd-0.dtkyni on host 'osd-0' Scheduled to restart mgr.osd-2.kqcxwu on host 'osd-2' Scheduled to restart mon.osd-2 on host 'osd-2' Scheduled to restart mon.osd-3 on host 'osd-3' Scheduled to restart mon.osd-1 on host 'osd-1' Scheduled to restart node-exporter.osd-0 on host 'osd-0' Scheduled to restart node-exporter.osd-2 on host 'osd-2' Scheduled to restart node-exporter.osd-3 on host 'osd-3' Scheduled to restart node-exporter.osd-1 on host 'osd-1' Scheduled to restart osd.1 on host 'osd-0' Scheduled to restart osd.4 on host 'osd-0' Scheduled to restart osd.0 on host 'osd-2' Scheduled to restart osd.5 on host 'osd-2' Scheduled to restart osd.2 on host 'osd-3' Scheduled to restart osd.6 on host 'osd-3' Scheduled to restart osd.3 on host 'osd-1' Scheduled to restart osd.7 on host 'osd-1' Scheduled to restart prometheus.osd-0 on host 'osd-0' Scheduled to restart rgw.rgw.ssl.osd-1.smzpfj on host 'osd-1'",
"ceph config rm global ms_client_mode ceph config rm global ms_cluster_mode ceph config rm global ms_service_mode ceph config rm global rbd_default_map_options ceph config dump | grep ms_",
"ceph orch ls --format plain | tail -n +2 | awk '{print USD1}' | xargs -I {} ceph orch restart {} Scheduled to restart alertmanager.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-2 on host 'osd-2' Scheduled to restart ceph-exporter.osd-3 on host 'osd-3' Scheduled to restart ceph-exporter.osd-1 on host 'osd-1' Scheduled to restart crash.osd-0 on host 'osd-0' Scheduled to restart crash.osd-2 on host 'osd-2' Scheduled to restart crash.osd-3 on host 'osd-3' Scheduled to restart crash.osd-1 on host 'osd-1' Scheduled to restart grafana.osd-0 on host 'osd-0' Scheduled to restart mds.fsvol001.osd-0.lpciqk on host 'osd-0' Scheduled to restart mds.fsvol001.osd-2.wocnxz on host 'osd-2' Scheduled to restart mgr.osd-0.dtkyni on host 'osd-0' Scheduled to restart mgr.osd-2.kqcxwu on host 'osd-2' Scheduled to restart mon.osd-2 on host 'osd-2' Scheduled to restart mon.osd-3 on host 'osd-3' Scheduled to restart mon.osd-1 on host 'osd-1' Scheduled to restart node-exporter.osd-0 on host 'osd-0' Scheduled to restart node-exporter.osd-2 on host 'osd-2' Scheduled to restart node-exporter.osd-3 on host 'osd-3' Scheduled to restart node-exporter.osd-1 on host 'osd-1' Scheduled to restart osd.1 on host 'osd-0' Scheduled to restart osd.4 on host 'osd-0' Scheduled to restart osd.0 on host 'osd-2' Scheduled to restart osd.5 on host 'osd-2' Scheduled to restart osd.2 on host 'osd-3' Scheduled to restart osd.6 on host 'osd-3' Scheduled to restart osd.3 on host 'osd-1' Scheduled to restart osd.7 on host 'osd-1' Scheduled to restart prometheus.osd-0 on host 'osd-0' Scheduled to restart rgw.rgw.ssl.osd-1.smzpfj on host 'osd-1'",
"ceph orch ps NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID alertmanager.osd-0 osd-0 *:9093,9094 running (116s) 9s ago 10h 19.5M - 0.26.0 7dbf12091920 4694a72d4bbd ceph-exporter.osd-0 osd-0 running (19s) 9s ago 10h 7310k - 18.2.1-229.el9cp 3fd804e38f5b 49bdc7d99471 ceph-exporter.osd-1 osd-1 running (97s) 26s ago 10h 7285k - 18.2.1-229.el9cp 3fd804e38f5b 7000d59d23b4 ceph-exporter.osd-2 osd-2 running (76s) 26s ago 10h 7306k - 18.2.1-229.el9cp 3fd804e38f5b 3907515cc352 ceph-exporter.osd-3 osd-3 running (49s) 26s ago 10h 6971k - 18.2.1-229.el9cp 3fd804e38f5b 3f3952490780 crash.osd-0 osd-0 running (17s) 9s ago 10h 6878k - 18.2.1-229.el9cp 3fd804e38f5b 38e041fb86e3 crash.osd-1 osd-1 running (96s) 26s ago 10h 6895k - 18.2.1-229.el9cp 3fd804e38f5b 21ce3ef7d896 crash.osd-2 osd-2 running (74s) 26s ago 10h 6899k - 18.2.1-229.el9cp 3fd804e38f5b 210ca9c8d928 crash.osd-3 osd-3 running (47s) 26s ago 10h 6899k - 18.2.1-229.el9cp 3fd804e38f5b 710d42d9d138 grafana.osd-0 osd-0 *:3000 running (114s) 9s ago 10h 72.9M - 10.4.0-pre f142b583a1b1 3dc5e2248e95 mds.fsvol001.osd-0.qjntcu osd-0 running (99s) 9s ago 10h 17.5M - 18.2.1-229.el9cp 3fd804e38f5b 50efa881c04b mds.fsvol001.osd-2.qneujv osd-2 running (51s) 26s ago 10h 15.3M - 18.2.1-229.el9cp 3fd804e38f5b a306f2d2d676 mgr.osd-0.zukgyq osd-0 *:9283,8765,8443 running (21s) 9s ago 10h 442M - 18.2.1-229.el9cp 3fd804e38f5b 8ef9b728675e mgr.osd-1.jqfyal osd-1 *:8443,9283,8765 running (92s) 26s ago 10h 480M - 18.2.1-229.el9cp 3fd804e38f5b 1ab52db89bfd mon.osd-1 osd-1 running (90s) 26s ago 10h 41.7M 2048M 18.2.1-229.el9cp 3fd804e38f5b 88d1fe1e10ac mon.osd-2 osd-2 running (72s) 26s ago 10h 31.1M 2048M 18.2.1-229.el9cp 3fd804e38f5b 02f57d3bb44f mon.osd-3 osd-3 running (45s) 26s ago 10h 24.0M 2048M 18.2.1-229.el9cp 3fd804e38f5b 5e3783f2b4fa node-exporter.osd-0 osd-0 *:9100 running (15s) 9s ago 10h 7843k - 1.7.0 8c904aa522d0 2dae2127349b node-exporter.osd-1 osd-1 *:9100 running (94s) 26s ago 10h 11.2M - 1.7.0 8c904aa522d0 010c3fcd55cd node-exporter.osd-2 osd-2 *:9100 running (69s) 26s ago 10h 17.2M - 1.7.0 8c904aa522d0 436f2d513f31 node-exporter.osd-3 osd-3 *:9100 running (41s) 26s ago 10h 12.4M - 1.7.0 8c904aa522d0 5579f0d494b8 osd.0 osd-0 running (109s) 9s ago 10h 126M 4096M 18.2.1-229.el9cp 3fd804e38f5b 997076cd39d4 osd.1 osd-1 running (85s) 26s ago 10h 139M 4096M 18.2.1-229.el9cp 3fd804e38f5b 08b720f0587d osd.2 osd-2 running (65s) 26s ago 10h 143M 4096M 18.2.1-229.el9cp 3fd804e38f5b 104ad4227163 osd.3 osd-3 running (36s) 26s ago 10h 94.5M 1435M 18.2.1-229.el9cp 3fd804e38f5b db8b265d9f43 osd.4 osd-0 running (104s) 9s ago 10h 164M 4096M 18.2.1-229.el9cp 3fd804e38f5b 50dcbbf7e012 osd.5 osd-1 running (80s) 26s ago 10h 131M 4096M 18.2.1-229.el9cp 3fd804e38f5b 63b21fe970b5 osd.6 osd-3 running (32s) 26s ago 10h 243M 1435M 18.2.1-229.el9cp 3fd804e38f5b 26c7ba208489 osd.7 osd-2 running (61s) 26s ago 10h 130M 4096M 18.2.1-229.el9cp 3fd804e38f5b 871a2b75e64f prometheus.osd-0 osd-0 *:9095 running (12s) 9s ago 10h 44.6M - 2.48.0 58069186198d e49a064d2478 rgw.rgw.ssl.osd-1.bsmbgd osd-1 *:80 running (78s) 26s ago 10h 75.4M - 18.2.1-229.el9cp 3fd804e38f5b d03c9f7ae4a4",
"oc patch storagecluster ocs-external-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": false}}} }]' storagecluster.ocs.openshift.io/ocs-external-storagecluster patched",
"oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 12h Ready true 2024-11-06T20:48:03Z 4.18.0",
"oc get storagecluster ocs-external-storagecluster -o yaml | yq '.spec.network.connections' encryption: enabled: false",
"storage: pvc: claim: <new-pvc-name>",
"storage: pvc: claim: ocs4registry",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG Accesskey> --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG Secretkey> --namespace openshift-image-registry",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\": {\"managementState\": \"Managed\"}}'",
"oc describe noobaa",
"oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: [..] name: cluster spec: [..] storage: s3: bucket: <Unique-bucket-name> region: us-east-1 (Use this region as default) regionEndpoint: https://<Endpoint-name>:<port> virtualHostedStyle: false",
"oc get pods -n openshift-image-registry",
"oc get pods -n openshift-image-registry",
"oc get pods -n openshift-image-registry NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-56d78bc5fb-bxcgv 2/2 Running 0 44d image-pruner-1605830400-29r7k 0/1 Completed 0 10h image-registry-b6c8f4596-ln88h 1/1 Running 0 17d node-ca-2nxvz 1/1 Running 0 44d node-ca-dtwjd 1/1 Running 0 44d node-ca-h92rj 1/1 Running 0 44d node-ca-k9bkd 1/1 Running 0 44d node-ca-stkzc 1/1 Running 0 44d node-ca-xn8h4 1/1 Running 0 44d",
"oc describe pod <image-registry-name>",
"oc describe pod image-registry-b6c8f4596-ln88h Environment: REGISTRY_STORAGE_S3_REGIONENDPOINT: http://s3.openshift-storage.svc REGISTRY_STORAGE: s3 REGISTRY_STORAGE_S3_BUCKET: bucket-registry-mcg REGISTRY_STORAGE_S3_REGION: us-east-1 REGISTRY_STORAGE_S3_ENCRYPT: true REGISTRY_STORAGE_S3_VIRTUALHOSTEDSTYLE: false REGISTRY_STORAGE_S3_USEDUALSTACK: true REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 'REGISTRY_STORAGE_S3_ACCESSKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 'REGISTRY_STORAGE_S3_SECRETKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: 57b943f691c878e342bac34e657b702bd6ca5488d51f839fecafa918a79a5fc6ed70184cab047601403c1f383e54d458744062dcaaa483816d82408bb56e686f REGISTRY_LOG_LEVEL: info REGISTRY_OPENSHIFT_QUOTA_ENABLED: true REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: inmemory REGISTRY_STORAGE_DELETE_ENABLED: true REGISTRY_OPENSHIFT_METRICS_ENABLED: true REGISTRY_OPENSHIFT_SERVER_ADDR: image-registry.openshift-image-registry.svc:5000 REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/tls.crt REGISTRY_HTTP_TLS_KEY: /etc/secrets/tls.key",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>",
"apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>",
"oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>",
"apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label> [...]",
"oc get clusterresourcequota -A oc describe clusterresourcequota -A",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd",
"config.yaml: | openshift-storage: delete: days: 5",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ceph-multus-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.200.0/24\", \"routes\": [ {\"dst\": \"NODE_IP_CIDR\"} ] } }'",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'",
"oc get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode >ceph-external-cluster-details-exporter.py",
"oc get cm rook-ceph-external-cluster-script-config -n openshift-storage -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix",
"caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]",
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"oc get pvc data-pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO ocs-storagecluster-ceph-rbd 20h",
"oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@monthly\"",
"persistentvolumeclaim/data-pvc annotated",
"oc get reclaimspacecronjobs.csiaddons.openshift.io",
"NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @monthly 3s",
"oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@weekly\" --overwrite=true",
"persistentvolumeclaim/data-pvc annotated",
"oc get reclaimspacecronjobs.csiaddons.openshift.io",
"NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 @weekly 3s",
"oc get reclaimspacecronjobs -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate reclaimspacecronjobs <RECLAIMSPACECRONJOB_NAME> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch reclaimspacecronjobs <RECLAIMSPACECRONJOB_NAME> -p '{\"spec\": {\"suspend\": true}}' --type=merge",
"apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceJob metadata: name: sample-1 spec: target: persistentVolumeClaim: pvc-1 timeout: 360",
"apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceCronJob metadata: name: reclaimspacecronjob-sample spec: jobTemplate: spec: target: persistentVolumeClaim: data-pvc timeout: 360 schedule: '@weekly' concurrencyPolicy: Forbid",
"Status: Completion Time: 2023-03-08T18:56:18Z Conditions: Last Transition Time: 2023-03-08T18:56:18Z Message: Failed to make controller request: context deadline exceeded Observed Generation: 1 Reason: failed Status: True Type: Failed Message: Maximum retry limit reached Result: Failed Retries: 6 Start Time: 2023-03-08T18:33:55Z",
"apiVersion: v1 kind: ConfigMap metadata: name: csi-addons-config namespace: openshift-storage data: \"reclaim-space-timeout\": \"6m\"",
"delete po -n openshift-storage -l \"app.kubernetes.io/name=csi-addons\"",
"odf subvolume ls --stale",
"Filesystem Subvolume Subvolumegroup State ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110004 csi stale ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110005 csi stale",
"odf subvolume delete <subvolumes> <filesystem> <subvolumegroup>",
"odf subvolume delete csi-vol-427774b4-340b-11ed-8d66-0242ac110004,csi-vol-427774b4-340b-11ed-8d66-0242ac110005 ocs-storagecluster csi",
"Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted",
"oc edit configmap rook-ceph-operator-config -n openshift-storage",
"oc get configmap rook-ceph-operator-config -n openshift-storage -o yaml",
"apiVersion: v1 data: [...] CSI_PLUGIN_TOLERATIONS: | - key: nodetype operator: Equal value: infra effect: NoSchedule - key: node.ocs.openshift.io/storage operator: Equal value: \"true\" effect: NoSchedule [...] kind: ConfigMap metadata: [...]",
"oc delete -n openshift-storage pod <name of the rook_ceph_operator pod>",
"oc delete -n openshift-storage pod rook-ceph-operator-5446f9b95b-jrn2j pod \"rook-ceph-operator-5446f9b95b-jrn2j\" deleted",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephFilesystems/dataPoolSpec/replicated/size\", \"value\": 2 }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get cephfilesystem ocs-storagecluster-cephfilesystem -o=jsonpath='{.spec.dataPools}' | jq [ { \"application\": \"\", \"deviceClass\": \"ssd\", \"erasureCoded\": { \"codingChunks\": 0, \"dataChunks\": 0 }, \"failureDomain\": \"zone\", \"mirroring\": {}, \"quotas\": {}, \"replicated\": { \"replicasPerFailureDomain\": 1, \"size\": 2, \"targetSizeRatio\": 0.49 }, \"statusCheck\": { \"mirror\": {} } } ]",
"ceph osd pool ls | grep filesystem ocs-storagecluster-cephfilesystem-metadata ocs-storagecluster-cephfilesystem-data0",
"oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{\"spec\": {\"nfs\":{\"enable\": true}}}'",
"-n openshift-storage describe cephnfs ocs-storagecluster-cephnfs",
"-n openshift-storage get pod | grep csi-nfsplugin",
"csi-nfsplugin-47qwq 2/2 Running 0 10s csi-nfsplugin-77947 2/2 Running 0 10s csi-nfsplugin-ct2pm 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-2rm2w 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-8nj5h 2/2 Running 0 10s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <desired_name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-nfs",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: <pvc_name> readOnly: false",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: <volume_name> mountPath: /var/lib/www/html",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: my-nfs-export",
"oc get pods -n openshift-storage | grep rook-ceph-nfs",
"oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs",
"oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs ceph_nfs=my-nfs",
"apiVersion: v1 kind: Service metadata: name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer namespace: openshift-storage spec: ports: - name: nfs port: 2049 type: LoadBalancer externalTrafficPolicy: Local selector: app: rook-ceph-nfs ceph_nfs: <my-nfs> instance: a",
"oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"get pvc pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}' /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215",
"oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}' [{\"hostname\":\"ingress-id.somedomain.com\"}]",
"mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path",
"odf get recovery-profile",
"odf set recovery-profile <option>",
"odf get recovery-profile",
"odf set full 0.9",
"odf set full 0.92",
"odf set full 0.85",
"odf set backfillfull 0.85",
"odf set nearfull 0.8",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/fullRatio\", \"value\": 0.90 }]'",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/backfillFullRatio\", \"value\": 0.85 }]'",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/nearFullRatio\", \"value\": 0.8 }]'"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/managing_and_allocating_storage_resources/managing-persistent-volume-claims_rhodf |
Chapter 109. REST | Chapter 109. REST Both producer and consumer are supported The REST component allows to define REST endpoints (consumer) using the Rest DSL and plugin to other Camel components as the REST transport. The rest component can also be used as a client (producer) to call REST services. 109.1. Dependencies When using rest with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-rest-starter</artifactId> </dependency> 109.2. URI format 109.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 109.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 109.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 109.4. Component Options The REST component supports 8 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerComponentName (consumer) The Camel Rest component to use for (consumer) the REST transport, such as jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String apiDoc (producer) The swagger api doc resource to use. The resource is loaded from classpath by default and must be in JSON format. String componentName (producer) Deprecated The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. 
String host (producer) Host and port of HTTP service to use (override host in swagger schema). String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean producerComponentName (producer) The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 109.5. Endpoint Options The REST endpoint is configured using URI syntax: with the following path and query parameters: 109.5.1. Path Parameters (3 parameters) Name Description Default Type method (common) Required HTTP method to use. Enum values: get post put delete patch head trace connect options String path (common) Required The base path. String uriTemplate (common) The uri template. String 109.5.2. Query Parameters (16 parameters) Name Description Default Type consumes (common) Media type such as: 'text/xml', or 'application/json' this REST service accepts. By default we accept all kinds of types. String inType (common) To declare the incoming POJO binding type as a FQN class name. String outType (common) To declare the outgoing POJO binding type as a FQN class name. String produces (common) Media type such as: 'text/xml', or 'application/json' this REST service returns. String routeId (common) Name of the route this REST services creates. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerComponentName (consumer) The Camel Rest component to use for (consumer) the REST transport, such as jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String description (consumer) Human description to document this REST service. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. 
Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern apiDoc (producer) The openapi api doc resource to use. The resource is loaded from classpath by default and must be in JSON format. String bindingMode (producer) Configures the binding mode for the producer. If set to anything other than 'off' the producer will try to convert the body of the incoming message from inType to the json or xml, and the response from json or xml to outType. Enum values: auto off json xml json_xml RestBindingMode host (producer) Host and port of HTTP service to use (override host in openapi schema). String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean producerComponentName (producer) The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. String queryParameters (producer) Query parameters for the HTTP service to call. The query parameters can contain multiple parameters separated by ampersand such such as foo=123&bar=456. String 109.6. Supported rest components The following components support rest consumer (Rest DSL): camel-servlet camel-platform-http The following components support rest producer: camel-http 109.7. Path and uriTemplate syntax The path and uriTemplate option is defined using a REST syntax where you define the REST context path using support for parameters. Note If no uriTemplate is configured then path option works the same way. It does not matter if you configure only path or if you configure both options. Though configuring both a path and uriTemplate is a more common practice with REST. The following is a Camel route using a path only from("rest:get:hello") .transform().constant("Bye World"); And the following route uses a parameter which is mapped to a Camel header with the key "me". from("rest:get:hello/{me}") .transform().simple("Bye USD{header.me}"); The following examples have configured a base path as "hello" and then have two REST services configured using uriTemplates. from("rest:get:hello:/{me}") .transform().simple("Hi USD{header.me}"); from("rest:get:hello:/french/{me}") .transform().simple("Bonjour USD{header.me}"); Note The Rest endpoint path does not accept escaped characters, for example, the plus sign. This is default behavior of Apache Camel 3. 109.8. Rest producer examples You can use the rest component to call REST services like any other Camel component. 
For example, to call a REST service using hello/{me} you can do from("direct:start") .to("rest:get:hello/{me}"); The dynamic value {me} is then mapped to a Camel message header with the same name. So to call this REST service you can send an empty message body and a header as shown: template.sendBodyAndHeader("direct:start", null, "me", "Donald Duck"); The Rest producer needs to know the hostname and port of the REST service, which you can configure using the host option as shown: from("direct:start") .to("rest:get:hello/{me}?host=myserver:8080/foo"); Instead of using the host option, you can configure the host on the restConfiguration as shown: restConfiguration().host("myserver:8080/foo"); from("direct:start") .to("rest:get:hello/{me}"); You can use the producerComponent to select which Camel component to use as the HTTP client, for example to use http you can do: restConfiguration().host("myserver:8080/foo").producerComponent("http"); from("direct:start") .to("rest:get:hello/{me}"); 109.9. Rest producer binding The REST producer supports binding using JSon or XML like the rest-dsl does. For example, to use jetty with the json binding mode turned on, you can configure this in the rest configuration: restConfiguration().component("jetty").host("localhost").port(8080).bindingMode(RestBindingMode.json); from("direct:start") .to("rest:post:user"); Then when calling the REST service using the rest producer, it automatically binds any POJOs to json before calling the REST service: UserPojo user = new UserPojo(); user.setId(123); user.setName("Donald Duck"); template.sendBody("direct:start", user); In the example above we send a POJO instance UserPojo as the message body. Because JSon binding is turned on in the rest configuration, the POJO is marshalled from POJO to JSon before calling the REST service. However, if you also want to perform binding for the response message (that is, what the REST service sends back as the response), you need to configure the outType option to specify the class name of the POJO that the JSon response is unmarshalled into. For example, if the REST service returns a JSon payload that binds to com.foo.MyResponsePojo you can configure this as shown: restConfiguration().component("jetty").host("localhost").port(8080).bindingMode(RestBindingMode.json); from("direct:start") .to("rest:post:user?outType=com.foo.MyResponsePojo"); Note You must configure the outType option if you want POJO binding to happen for the response messages received from calling the REST service. 109.10. More examples See Rest DSL, which offers more examples and shows how you can use the Rest DSL to define those in a nicer RESTful way. There is a camel-example-servlet-rest-tomcat example in the Apache Camel distribution that demonstrates how to use the Rest DSL with SERVLET as the transport, which can be deployed on Apache Tomcat or similar web containers. 109.11. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.rest-api.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.
true Boolean camel.component.rest-api.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.rest-api.enabled Whether to enable auto configuration of the rest-api component. This is enabled by default. Boolean camel.component.rest.api-doc The swagger api doc resource to use. The resource is loaded from classpath by default and must be in JSON format. String camel.component.rest.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.rest.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.rest.consumer-component-name The Camel Rest component to use for (consumer) the REST transport, such as jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.component.rest.enabled Whether to enable auto configuration of the rest component. This is enabled by default. Boolean camel.component.rest.host Host and port of HTTP service to use (override host in swagger schema). String camel.component.rest.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.rest.producer-component-name The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. String camel.component.rest.component-name Deprecated The Camel Rest component to use for (producer) the REST transport, such as http, undertow. 
If no component has been explicitly configured, then Camel will look up whether there is a Camel component that integrates with the Rest DSL, or whether an org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is used. String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-rest-starter</artifactId> </dependency>",
"rest://method:path[:uriTemplate]?[options]",
"rest:method:path:uriTemplate",
"from(\"rest:get:hello\") .transform().constant(\"Bye World\");",
"from(\"rest:get:hello/{me}\") .transform().simple(\"Bye USD{header.me}\");",
"from(\"rest:get:hello:/{me}\") .transform().simple(\"Hi USD{header.me}\"); from(\"rest:get:hello:/french/{me}\") .transform().simple(\"Bonjour USD{header.me}\");",
"from(\"direct:start\") .to(\"rest:get:hello/{me}\");",
"template.sendBodyAndHeader(\"direct:start\", null, \"me\", \"Donald Duck\");",
"from(\"direct:start\") .to(\"rest:get:hello/{me}?host=myserver:8080/foo\");",
"restConfiguration().host(\"myserver:8080/foo\"); from(\"direct:start\") .to(\"rest:get:hello/{me}\");",
"restConfiguration().host(\"myserver:8080/foo\").producerComponent(\"http\"); from(\"direct:start\") .to(\"rest:get:hello/{me}\");",
"restConfiguration().component(\"jetty\").host(\"localhost\").port(8080).bindingMode(RestBindingMode.json); from(\"direct:start\") .to(\"rest:post:user\");",
"UserPojo user = new UserPojo(); user.setId(123); user.setName(\"Donald Duck\"); template.sendBody(\"direct:start\", user);",
"restConfiguration().component(\"jetty\").host(\"localhost\").port(8080).bindingMode(RestBindingMode.json); from(\"direct:start\") .to(\"rest:post:user?outType=com.foo.MyResponsePojo\");"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-rest-component-starter |
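The Spring Boot auto-configuration properties and the restConfiguration() calls in the REST chapter above cover the same settings in two styles. The sketch below is illustrative only: it assumes a Spring Boot application with camel-rest-starter and an HTTP producer component such as camel-http-starter on the classpath, and the class name RestProducerRoute is chosen for the example rather than taken from the documentation.

// application.properties (properties from the auto-configuration table in section 109.11):
//   camel.component.rest.host=myserver:8080/foo
//   camel.component.rest.producer-component-name=http

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class RestProducerRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Host and producer component come from the properties above, so the
        // endpoint only needs the HTTP method and path. The {me} template
        // parameter is taken from the "me" message header (section 109.8).
        from("direct:start")
            .to("rest:get:hello/{me}");
    }
}

Sending a message with template.sendBodyAndHeader("direct:start", null, "me", "Donald Duck") then calls the hello/{me} service on the configured host, matching the behavior of the host option shown in the producer examples.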
Logging | Logging OpenShift Dedicated 4 Logging installation, usage, and release notes for OpenShift Dedicated Red Hat OpenShift Documentation Team | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"oc project openshift-logging",
"oc get clusterlogging instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: collector-2rhqp: ip-10-0-169-13.ec2.internal collector-6fgjh: ip-10-0-165-244.ec2.internal collector-6l2ff: ip-10-0-128-218.ec2.internal collector-54nx5: ip-10-0-139-30.ec2.internal collector-flpnn: ip-10-0-147-228.ec2.internal collector-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - collector-2rhqp - collector-54nx5 - collector-6fgjh - collector-6l2ff - collector-flpnn - collector-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}",
"Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:",
"Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable",
"Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:",
"oc project openshift-logging",
"oc describe deployment cluster-logging-operator",
"Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----",
"oc get replicaset",
"NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m",
"oc describe replicaset cluster-logging-operator-574b8987df",
"Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----",
"oc delete pod --selector logging-infra=collector",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/nodes?v",
"oc -n openshift-logging get pods -l component=elasticsearch",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/master?v",
"oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging",
"oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/recovery?active_only=true",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health | grep number_of_pending_tasks",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_search/scroll/_all -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_red_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_nodes/stats?pretty",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep unassigned_shards",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep relocating_shards",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc project openshift-logging",
"oc get Elasticsearch",
"NAME AGE elasticsearch 5h9m",
"oc get Elasticsearch <Elasticsearch-instance> -o yaml",
"oc get Elasticsearch elasticsearch -n openshift-logging -o yaml",
"status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable",
"status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable",
"status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy",
"status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters",
"status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices",
"Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw",
". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>",
"oc get deployment --selector component=elasticsearch -o name",
"deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3",
"oc describe deployment elasticsearch-cdm-1gon-1",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>",
"oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d",
"oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>",
"eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0",
"eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3",
"Total number of Namespaces. es_index_namespaces_total 5",
"es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5",
"message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\",",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cd-tuhduuw-1-f5c885dbf-dlqws 1/1 Running 0 2m4s elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s",
"cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: <channel> 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-node-lease elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-public elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-system elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-credential-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: ocpConsole: logsLimit: 15 managementState: Managed",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"oc -n openshift-logging delete subscription <subscription>",
"oc -n openshift-logging delete operatorgroup <operator_group_name>",
"oc delete clusterserviceversion cluster-logging.<version>",
"oc get operatorgroup <operator_group_name> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - \"\"",
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {}",
"oc apply -f <filename>.yaml",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"oc get consoles.operator.openshift.io cluster -o yaml |grep logging-view-plugin || oc patch consoles.operator.openshift.io cluster --type=merge --patch '{ \"spec\": { \"plugins\": [\"logging-view-plugin\"]}}'",
"oc patch clusterlogging instance --type=merge --patch '{ \"metadata\": { \"annotations\": { \"logging.openshift.io/ocp-console-migration-target\": \"lokistack-dev\" }}}' -n openshift-logging",
"clusterlogging.logging.openshift.io/instance patched",
"oc get clusterlogging instance -o=jsonpath='{.metadata.annotations.logging\\.openshift\\.io/ocp-console-migration-target}' -n openshift-logging",
"\"lokistack-dev\"",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"oc edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: visualization: type: \"kibana\" kibana: replicas: 1 1",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {}",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}",
"pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json",
"{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }",
"{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json",
"oc create -f <filename>.yaml",
"oc delete pod --selector logging-infra=collector",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json",
"apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-secure 4 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 5 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 6 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 8 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"",
"oc create secret generic -n <namespace> <secret_name> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 pipelines: - inputRefs: - <log_type> 4 outputRefs: - <output_name> 5 outputs: - name: <output_name> 6 type: <output_type> 7 url: <log_output_url> 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: tuning: delivery: AtLeastOnce 1 compression: none 2 maxWrite: <integer> 3 minRetryDuration: 1s 4 maxRetryDuration: 1s 5",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true",
"[transforms.detect_exceptions_app-logs] type = \"detect_exceptions\" inputs = [\"application\"] languages = [\"All\"] group_by = [\"kubernetes.namespace_name\",\"kubernetes.pod_name\",\"kubernetes.container_name\"] expire_after_ms = 2000 multiline_flush_interval_ms = 1000",
"<label @MULTILINE_APP_LOGS> <match kubernetes.**> @type detect_exceptions remove_tag_prefix 'kubernetes' message message force_line_breaks true multiline_flush_interval .2 </match> </label>",
"oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json= <your_service_account_key_file.json>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: gcp-1 type: googleCloudLogging secret: name: gcp-secret googleCloudLogging: projectId : \"openshift-gce-devel\" 4 logId : \"app-gcp\" 5 pipelines: - name: test-app inputRefs: 6 - application outputRefs: - gcp-1",
"oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: splunk-receiver 4 secret: name: vector-splunk-secret 5 type: splunk 6 url: <http://your.splunk.hec.url:8088> 7 pipelines: 8 - inputRefs: - application - infrastructure name: 9 outputRefs: - splunk-receiver 10",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: httpout-app type: http url: 4 http: headers: 5 h1: v1 h2: v2 method: POST secret: name: 6 tls: insecureSkipVerify: 7 pipelines: - name: inputRefs: - application outputRefs: - 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: openshift-logging type: Opaque data: shared_key: <your_shared_key> 1",
"Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName \"<resource_name>\" -Name \"<workspace_name>\"",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id 1 logType: my_log_type 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor-app type: azureMonitor azureMonitor: customerId: my-customer-id logType: application_log 1 secret: name: my-secret - name: azure-monitor-infra type: azureMonitor azureMonitor: customerId: my-customer-id logType: infra_log # secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor-app - name: infra-pipeline inputRefs: - infrastructure outputRefs: - azure-monitor-infra",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id logType: my_log_type azureResourceId: \"/subscriptions/111111111\" 1 host: \"ods.opinsights.azure.com\" 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - <output_name>",
"- inputRefs: [ myAppLogData, myOtherAppLogData ]",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 outputRefs: default filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: loki-insecure 4 type: \"loki\" 5 url: http://loki.insecure.com:3100 6 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 7 type: \"loki\" url: https://loki.secure.com:3100 secret: name: loki-secret 8 loki: tenantKey: kubernetes.namespace_name 9 labelKeys: - kubernetes.labels.foo 10 pipelines: - name: application-logs 11 inputRefs: 12 - application - audit outputRefs: 13 - loki-secure",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-example 4 type: elasticsearch 5 elasticsearch: version: 8 6 url: http://elasticsearch.example.com:9200 7 secret: name: es-secret 8 pipelines: - name: application-logs 9 inputRefs: 10 - application - audit outputRefs: - elasticsearch-example 11 - default 12 labels: myLabel: \"myValue\" 13",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password>",
"oc create secret -n openshift-logging openshift-test-secret.yaml",
"kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: \"C1234\" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: rsyslog-east 4 type: syslog 5 syslog: 6 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 7 secret: 8 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 9 inputRefs: 10 - audit - application outputRefs: 11 - rsyslog-east - default 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"",
"oc create -f <filename>.yaml",
"spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}",
"<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: app-logs 4 type: kafka 5 url: tls://kafka.example.devlab.com:9093/app-topic 6 secret: name: kafka-secret 7 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 8 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 9 inputRefs: 10 - application outputRefs: 11 - app-logs labels: logType: \"application\" 12 - name: infra-topic 13 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: \"infra\" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs labels: logType: \"audit\"",
"spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=",
"oc apply -f cw-secret.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: cw-secret 9 pipelines: - name: infra-logs 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"oc create -f <file-name>.yaml",
"oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"",
"oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message",
"oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"",
"aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },",
"cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"",
"cloudwatch: groupBy: namespaceName region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"cloudwatch: groupBy: namespaceUUID region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions",
"apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <your_role_name>-credrequest namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - logs:PutLogEvents - logs:CreateLogGroup - logs:PutRetentionPolicy - logs:CreateLogStream - logs:DescribeLogGroups - logs:DescribeLogStreams effect: Allow resource: arn:aws:logs:*:*:* secretRef: name: <your_role_name> namespace: openshift-logging serviceAccountNames: - logcollector",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1",
"oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: clf-collector 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: <your_secret_name> 9 pipelines: - name: to-cloudwatch 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: <log_collector_type> 1 resources: {} tolerations: {}",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1alpha1 kind: LogFileMetricExporter metadata: name: instance namespace: openshift-logging spec: nodeSelector: {} 1 resources: 2 limits: cpu: 500m memory: 256Mi requests: cpu: 200m memory: 128Mi tolerations: [] 3",
"oc apply -f <filename>.yaml",
"oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging",
"NAME READY STATUS RESTARTS AGE logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"apiVersion: logging.openshift.io/v1beta1 kind: ClusterLogForwarder metadata: spec: serviceAccountName: <service_account_name> inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - name: http-pipeline inputRefs: - http-receiver",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - inputRefs: - http-receiver name: http-pipeline",
"oc apply -f <filename>.yaml",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -l component=collector -n openshift-logging",
"oc extract configmap/collector-config --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size \"#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}\" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer>",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel9:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9",
"oc process -f <templatefile> | oc apply -n openshift-logging -f -",
"oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -",
"serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created",
"oc get pods --selector component=eventrouter -o name -n openshift-logging",
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r",
"oc logs <cluster_logging_eventrouter_pod> -n openshift-logging",
"oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging",
"{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: ocpConsole: logsLimit: 15 managementState: Managed",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"oc create secret generic -n openshift-logging <your_secret_name> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"oc get secrets",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s",
"oc create secret generic logging-loki-aws --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\"",
"oc -n openshift-logging create secret generic \"logging-loki-aws\" --from-literal=bucketnames=\"<s3_bucket_name>\" --from-literal=region=\"<bucket_region>\" --from-literal=audience=\"<oidc_audience>\" 1",
"oc create secret generic logging-loki-azure --from-literal=container=\"<azure_container_name>\" --from-literal=environment=\"<azure_environment>\" \\ 1 --from-literal=account_name=\"<azure_account_name>\" --from-literal=account_key=\"<azure_account_key>\"",
"oc -n openshift-logging create secret generic logging-loki-azure --from-literal=environment=\"<azure_environment>\" --from-literal=account_name=\"<storage_account_name>\" --from-literal=container=\"<container_name>\"",
"oc create secret generic logging-loki-gcs --from-literal=bucketname=\"<bucket_name>\" --from-file=key.json=\"<path/to/key.json>\"",
"oc create secret generic logging-loki-minio --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<minio_bucket_endpoint>\" --from-literal=access_key_id=\"<minio_access_key_id>\" --from-literal=access_key_secret=\"<minio_access_key_secret>\"",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io",
"BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')",
"ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)",
"oc create -n openshift-logging secret generic logging-loki-odf --from-literal=access_key_id=\"<access_key_id>\" --from-literal=access_key_secret=\"<secret_access_key>\" --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"https://<bucket_host>:<bucket_port>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\" --from-literal=project_id=\"<swift_project_id>\" --from-literal=project_name=\"<swift_project_name>\" --from-literal=project_domain_id=\"<swift_project_domain_id>\" --from-literal=project_domain_name=\"<swift_project_domain_name>\" --from-literal=region=\"<swift_region>\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv -n --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"delete pvc __<pvc_name>__ -n openshift-logging",
"delete pod __<pod_name>__ -n openshift-logging",
"patch pvc __<pvc_name>__ -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\",\"type\": \"memberlist\"}}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc edit clusterlogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: <name> namespace: <namespace> spec: rules: enabled: true 1 selector: matchLabels: openshift.io/<label_name>: \"true\" 2 namespaceSelector: matchLabels: openshift.io/<label_name>: \"true\" 3",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .kubernetes.namespace_name matches: \"^open\" test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs1 infrastructure: sources: 1 - node - name: mylogs2 audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: v1 kind: Pod metadata: name: collector-example namespace: openshift-logging spec: collection: type: vector tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: vector tolerations: - key: collector 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV",
"currentCSV: serverless-operator.v1.28.0",
"oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless",
"subscription.operators.coreos.com \"serverless-operator\" deleted",
"oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless",
"clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted"
]
| https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/logging/index |
Chapter 24. Tags | Chapter 24. Tags 24.1. Tag Elements The tags collection provides information about tags in a Red Hat Virtualization environment. An API user accesses this information through the rel="tags" link obtained from the entry point URI. The following table shows specific elements contained in a tag resource representation. Table 24.1. Tag elements Element Type Description Properties host GUID A reference to the host to which the tag is attached. user GUID A reference to the user to which the tag is attached. vm GUID A reference to the VM to which the tag is attached. parent complex A reference to the parent tag. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/chap-tags
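For readers who want to see the collection in practice, a request along these lines retrieves it. This is only a sketch: it assumes the version 3 REST API entry point at /api, basic authentication, and a placeholder Manager host name, none of which are given above.
# List all tags in the environment (placeholder credentials and host; add --cacert or --insecure to match your CA setup)
curl -X GET -H "Accept: application/xml" -u "admin@internal:password" "https://rhvm.example.com/api/tags"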
Chapter 3. Deploying a Red Hat Enterprise Linux image as a virtual machine on Microsoft Azure | Chapter 3. Deploying a Red Hat Enterprise Linux image as a virtual machine on Microsoft Azure To deploy a Red Hat Enterprise Linux 9 (RHEL 9) image on Microsoft Azure, follow the information below. This chapter: Discusses your options for choosing an image Lists or refers to system requirements for your host system and virtual machine (VM) Provides procedures for creating a custom VM from an ISO image, uploading it to Azure, and launching an Azure VM instance Important You can create a custom VM from an ISO image, but Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. With Image Builder, you can create and upload an Azure Disk Image (VHD format). See Composing a Customized RHEL System Image for more information. For a list of Red Hat products that you can use securely on Azure, refer to Red Hat on Microsoft Azure . Prerequisites Sign up for a Red Hat Customer Portal account. Sign up for a Microsoft Azure account. 3.1. Red Hat Enterprise Linux image options on Azure The following table lists image choices for RHEL 9 on Microsoft Azure, and notes the differences in the image options. Table 3.1. Image options Image option Subscriptions Sample scenario Considerations Deploy a Red Hat Gold Image. Use your existing Red Hat subscriptions. Select a Red Hat Gold Image on Azure. For details on Gold Images and how to access them on Azure, see the Red Hat Cloud Access Reference Guide . The subscription includes the Red Hat product cost; you pay Microsoft for all other instance costs. Deploy a custom image that you move to Azure. Use your existing Red Hat subscriptions. Upload your custom image and attach your subscriptions. The subscription includes the Red Hat product cost; you pay Microsoft for all other instance costs. Deploy an existing Azure image that includes RHEL. The Azure images include a Red Hat product. Choose a RHEL image when you create a VM by using the Azure console, or choose a VM from the Azure Marketplace . You pay Microsoft hourly on a pay-as-you-go model. Such images are called "on-demand." Azure provides support for on-demand images through a support agreement. Red Hat provides updates to the images. Azure makes the updates available through the Red Hat Update Infrastructure (RHUI). Additional resources Using Red Hat Gold Images on Microsoft Azure Azure Marketplace Billing options in the Azure Marketplace Red Hat Enterprise Linux Bring-Your-Own-Subscription Gold Images in Azure Red Hat Cloud Access Reference Guide 3.2. Understanding base images This section includes information about using preconfigured base images and their configuration settings. 3.2.1. Using a custom base image To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image. To prepare a cloud image of RHEL, follow the instructions in the sections below. To prepare a Hyper-V cloud image of RHEL, see the Prepare a Red Hat-based virtual machine from Hyper-V Manager . 3.2.2. Required system packages To create and configure a base image of RHEL, your host system must have the following packages installed. Table 3.2. 
System packages Package Repository Description libvirt rhel-9-for-x86_64-appstream-rpms Open source API, daemon, and management tool for managing platform virtualization virt-install rhel-9-for-x86_64-appstream-rpms A command-line utility for building VMs libguestfs rhel-9-for-x86_64-appstream-rpms A library for accessing and modifying VM file systems guestfs-tools rhel-9-for-x86_64-appstream-rpms System administration tools for VMs; includes the virt-customize utility 3.2.3. Azure VM configuration settings Azure VMs must have the following configuration settings. Some of these settings are enabled during the initial VM creation. Other settings are set when provisioning the VM image for Azure. Keep these settings in mind as you move through the procedures. Refer to them as necessary. Table 3.3. VM configuration settings Setting Recommendation ssh ssh must be enabled to provide remote access to your Azure VMs. dhcp The primary virtual adapter should be configured for dhcp (IPv4 only). Swap Space Do not create a dedicated swap file or swap partition. You can configure swap space with the Windows Azure Linux Agent (WALinuxAgent). NIC Choose virtio for the primary virtual network adapter. encryption For custom images, use Network Bound Disk Encryption (NBDE) for full disk encryption on Azure. 3.2.4. Creating a base image from an ISO image The following procedure lists the steps and initial configuration requirements for creating a custom ISO image. Once you have configured the image, you can use the image as a template for creating additional VM instances. Prerequisites Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 9 for information and procedures. Procedure Download the latest Red Hat Enterprise Linux 9 DVD ISO image from the Red Hat Customer Portal . Create and start a basic Red Hat Enterprise Linux VM. For instructions, see Creating virtual machines . If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio . For example, the following command creates a kvmtest VM by using the rhel-9.0-x86_64-kvm.qcow2 image: If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console , with these caveats: Do not check Immediately Start VM . Change your Memory size to your preferred settings. Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM. Review the following additional installation selection and modifications. Select Minimal Install with the standard RHEL option. For Installation Destination , select Custom Storage Configuration . Use the following configuration information to make your selections. Verify at least 500 MB for /boot . For file system, use xfs, ext4, or ext3 for both boot and root partitions. Remove swap space. Swap space is configured on the physical blade server in Azure by the WALinuxAgent. On the Installation Summary screen, select Network and Host Name . Switch Ethernet to On . When the install starts: Create a root password. Create an administrative user account. When installation is complete, reboot the VM and log in to the root account. Once you are logged in as root , you can configure the image. 3.3. 
Configuring a custom base image for Microsoft Azure To deploy a RHEL 9 virtual machine (VM) with specific settings in Azure, you can create a custom base image for the VM. The following sections describe additional configuration changes that Azure requires. 3.3.1. Installing Hyper-V device drivers Microsoft provides network and storage device drivers as part of their Linux Integration Services (LIS) for Hyper-V package. You may need to install Hyper-V device drivers on the VM image prior to provisioning it as an Azure virtual machine (VM). Use the lsinitrd | grep hv command to verify that the drivers are installed. Procedure Enter the following grep command to determine if the required Hyper-V device drivers are installed. In the example below, all required drivers are installed. If all the drivers are not installed, complete the remaining steps. Note An hv_vmbus driver may exist in the environment. Even if this driver is present, complete the following steps. Create a file named hv.conf in /etc/dracut.conf.d . Add the following driver parameters to the hv.conf file. Note Note the spaces before and after the quotes, for example, add_drivers+=" hv_vmbus " . This ensures that unique drivers are loaded in the event that other Hyper-V drivers already exist in the environment. Regenerate the initramfs image. Verification Reboot the machine. Run the lsinitrd | grep hv command to verify that the drivers are installed. 3.3.2. Making configuration changes required for a Microsoft Azure deployment Before you deploy your custom base image to Azure, you must perform additional configuration changes to ensure that the virtual machine (VM) can properly operate in Azure. Procedure Log in to the VM. Register the VM, and enable the Red Hat Enterprise Linux 9 repository. Ensure that the cloud-init and hyperv-daemons packages are installed. Create cloud-init configuration files that are needed for integration with Azure services: To enable logging to the Hyper-V Data Exchange Service (KVP), create the /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg configuration file and add the following lines to that file. To add Azure as a datasource, create the /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg configuration file, and add the following lines to that file. To ensure that specific kernel modules are blocked from loading automatically, edit or create the /etc/modprobe.d/blocklist.conf file and add the following lines to that file. Modify udev network device rules: Remove the following persistent network device rules if present. To ensure that Accelerated Networking on Azure works as intended, create a new network device rule /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules and add the following line to it. Set the sshd service to start automatically. Modify kernel boot parameters: Open the /etc/default/grub file, and ensure the GRUB_TIMEOUT line has the following value. Remove the following options from the end of the GRUB_CMDLINE_LINUX line if present. Ensure the /etc/default/grub file contains the following lines with all the specified options. Note If you do not plan to run your workloads on HDDs, add elevator=none to the end of the GRUB_CMDLINE_LINUX line. This sets the I/O scheduler to none , which improves I/O performance when running workloads on SSDs. Regenerate the grub.cfg file. On a BIOS-based machine: In RHEL 9.2 and earlier: In RHEL 9.3 and later: On a UEFI-based machine: In RHEL 9.2 and earlier: In RHEL 9.3 and later: Warning The path to rebuild grub.cfg is same for both BIOS and UEFI based machines. 
Actual grub.cfg is present at BIOS path only. The UEFI path has a stub file that must not be modified or recreated using grub2-mkconfig command. If your system uses a non-default location for grub.cfg , adjust the command accordingly. Configure the Windows Azure Linux Agent ( WALinuxAgent ): Install and enable the WALinuxAgent package. To ensure that a swap partition is not used in provisioned VMs, edit the following lines in the /etc/waagent.conf file. Prepare the VM for Azure provisioning: Unregister the VM from Red Hat Subscription Manager. Clean up the existing provisioning details. Note This command generates warnings, which are expected because Azure handles the provisioning of VMs automatically. Clean the shell history and shut down the VM. 3.4. Converting the image to a fixed VHD format All Microsoft Azure VM images must be in a fixed VHD format. The image must be aligned on a 1 MB boundary before it is converted to VHD. To convert the image from qcow2 to a fixed VHD format and align the image, see the following procedure. Once you have converted the image, you can upload it to Azure. Procedure Convert the image from qcow2 to raw format. Create a shell script with the following content. Run the script. This example uses the name align.sh . If the message "Your image is already aligned. You do not need to resize." displays, proceed to the following step. If a value displays, your image is not aligned. Use the following command to convert the file to a fixed VHD format. The sample uses qemu-img version 2.12.0. Once converted, the VHD file is ready to upload to Azure. If the raw image is not aligned, complete the following steps to align it. Resize the raw file by using the rounded value displayed when you ran the verification script. Convert the raw image file to a VHD format. The sample uses qemu-img version 2.12.0. Once converted, the VHD file is ready to upload to Azure. 3.5. Installing the Azure CLI Complete the following steps to install the Azure command-line interface (Azure CLI 2.1). Azure CLI 2.1 is a Python-based utility that creates and manages VMs in Azure. Prerequisites You need to have an account with Microsoft Azure before you can use the Azure CLI. The Azure CLI installation requires Python 3.x. Procedure Import the Microsoft repository key. Create a local Azure CLI repository entry. Update the dnf package index. Check your Python version ( python --version ) and install Python 3.x, if necessary. Install the Azure CLI. Run the Azure CLI. Additional resources Azure CLI Azure CLI command reference 3.6. Creating resources in Azure Complete the following procedure to create the Azure resources that you need before you can upload the VHD file and create the Azure image. Procedure Authenticate your system with Azure and log in. Note If a browser is available in your environment, the CLI opens your browser to the Azure sign-in page. See Sign in with Azure CLI for more information and options. Create a resource group in an Azure region. Example: Create a storage account. See SKU Types for more information about valid SKU values. Example: Get the storage account connection string. Example: Export the connection string by copying the connection string and pasting it into the following command. This string connects your system to the storage account. Example: Create the storage container. Example: Create a virtual network. Example: Additional resources Azure Managed Disks Overview SKU Types 3.7. 
Uploading and creating an Azure image Complete the following steps to upload the VHD file to your container and create an Azure custom image. Note The exported storage connection string does not persist after a system reboot. If any of the commands in the following steps fail, export the connection string again. Procedure Upload the VHD file to the storage container. It may take several minutes. To get a list of storage containers, enter the az storage container list command. Example: Get the URL for the uploaded VHD file to use in the following step. Example: Create the Azure custom image. Note The default hypervisor generation of the VM is V1. You can optionally specify a V2 hypervisor generation by including the option --hyper-v-generation V2 . Generation 2 VMs use a UEFI-based boot architecture. See Support for generation 2 VMs on Azure for information about generation 2 VMs. The command may return the error "Only blobs formatted as VHDs can be imported." This error may mean that the image was not aligned to the nearest 1 MB boundary before it was converted to VHD . Example: 3.8. Creating and starting the VM in Azure The following steps provide the minimum command options to create a managed-disk Azure VM from the image. See az vm create for additional options. Procedure Enter the following command to create the VM. Note The option --generate-ssh-keys creates a private/public key pair. Private and public key files are created in ~/.ssh on your system. The public key is added to the authorized_keys file on the VM for the user specified by the --admin-username option. See Other authentication methods for additional information. Example: [clouduser@localhost]USD az vm create \ -g azrhelclirsgrp2 -l southcentralus -n rhel-azure-vm-1 \ --vnet-name azrhelclivnet1 --subnet azrhelclisubnet1 --size Standard_A2 \ --os-disk-name vm-1-osdisk --admin-username clouduser \ --generate-ssh-keys --image rhel9 { "fqdns": "", "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/virtualMachines/rhel-azure-vm-1", "location": "southcentralus", "macAddress": "", "powerState": "VM running", "privateIpAddress": "10.0.0.4", "publicIpAddress": " <public-IP-address> ", "resourceGroup": "azrhelclirsgrp2" Note the publicIpAddress . You need this address to log in to the VM in the following step. Start an SSH session and log in to the VM. If you see a user prompt, you have successfully deployed your Azure VM. You can now go to the Microsoft Azure portal and check the audit logs and properties of your resources. You can manage your VMs directly in this portal. If you are managing multiple VMs, you should use the Azure CLI. The Azure CLI provides a powerful interface to your resources in Azure. Enter az --help in the CLI or see the Azure CLI command reference to learn more about the commands you use to manage your VMs in Microsoft Azure. 3.9. Other authentication methods While recommended for increased security, using the Azure-generated key pair is not required. The following examples show two methods for SSH authentication. Example 1: These command options provision a new VM without generating a public key file. They allow SSH authentication by using a password. Example 2: These command options provision a new Azure VM and allow SSH authentication by using an existing public key file. 3.10. Attaching Red Hat subscriptions Using the subscription-manager command, you can register and attach your Red Hat subscription to a RHEL instance. Prerequisites You must have enabled your subscriptions. 
Procedure Register your system. Attach your subscriptions. You can use an activation key to attach subscriptions. See Creating Red Hat Customer Portal Activation Keys for more information. Alternatively, you can manually attach a subscription by using the ID of the subscription pool (Pool ID). See Attaching a host-based subscription to hypervisors . Optional: To collect various system metrics about the instance in the Red Hat Hybrid Cloud Console , you can register the instance with Red Hat Insights . For information on further configuration of Red Hat Insights, see Client Configuration Guide for Red Hat Insights . Additional resources Creating Red Hat Customer Portal Activation Keys Attaching a host-based subscription to hypervisors Client Configuration Guide for Red Hat Insights 3.11. Setting up automatic registration on Azure Gold Images To make deploying RHEL 9 virtual machines (VM) on Microsoft Azure faster and more convenient, you can set up Gold Images of RHEL 9 to be automatically registered to the Red Hat Subscription Manager (RHSM). Prerequisites RHEL 9 Gold Images are available to you in Microsoft Azure. For instructions, see Using Gold Images on Azure . Note A Microsoft Azure account can only be attached to a single Red Hat account at a time. Therefore, ensure no other users require access to the Azure account before attaching it to your Red Hat one. Procedure Use the Gold Image to create a RHEL 9 VM in your Azure instance. For instructions, see Creating and starting the VM in Azure . Start the created VM. In the RHEL 9 VM, enable automatic registration. Enable the rhsmcertd service. Disable the redhat.repo repository. Power off the VM, and save it as a managed image on Azure. For instructions, see How to create a managed image of a virtual machine or VHD . Create VMs by using the managed image. They will be automatically subscribed to RHSM. Verification In a RHEL 9 VM created using the above instructions, verify the system is registered to RHSM by executing the subscription-manager identity command. On a successfully registered system, this displays the UUID of the system. For example: Additional resources Red Hat Gold Images in Azure Overview of RHEL images in Azure Configuring cloud integration for Red Hat services 3.12. Configuring kdump for Microsoft Azure instances If a kernel crash occurs in a RHEL instance, you can use the kdump service to determine the cause of the crash. If kdump is configured correctly when your instance kernel terminates unexpectedly, kdump generates a dump file, known as a crash dump or a vmcore file. You can then analyze the file to find why the crash occurred and to debug your system. For kdump to work on Microsoft Azure instances, you might need to adjust the kdump reserved memory and the vmcore target to fit VM sizes and RHEL versions. Prerequisites You are using a Microsoft Azure environment that supports kdump : Standard_DS2_v2 VM Standard NV16as v4 Standard M416-208s v2 Standard M416ms v2 You have root permissions on the system. Your system meets the requirements for kdump configurations and targets. For details, see Supported kdump configurations and targets . Procedure Ensure that kdump and other necessary packages are installed on your system. Verify that the default location for crash dump files is set in the kdump configuration file and that the /var/crash file is available. Based on the size and version of your RHEL virtual machine (VM) instance, decide whether you need a vmcore target with more free space, such as /mnt/crash .
To do so, use the following table. Table 3.4. Virtual machine sizes that have been tested with GEN2 VM on Azure RHEL Version Standard DS1 v2 (1 vCPU, 3.5GiB) Standard NV16as v4 (16 vCPUs, 56 GiB) Standard M416-208s v2 (208 vCPUs, 5700 GiB) Standard M416ms v2 (416 vCPUs, 11400 GiB) RHEL 9.0 - RHEL 9.3 Default Default Target Target Default indicates that kdump works as expected with the default memory and the default kdump target. The default kdump target is /var/crash . Target indicates that kdump works as expected with the default memory. However, you might need to assign a target with more free space. If your instance requires it, assign a target with more free space, such as /mnt/crash . To do so, edit the /etc/kdump.conf file and replace the default path. The option path /mnt/crash represents the path to the file system in which kdump saves the crash dump file. For more options, such as writing the crash dump file to a different partition, directly to a device or storing it to a remote machine, see Configuring the kdump target . If your instance requires it, increase the crash kernel size to the sufficient size for kdump to capture the vmcore by adding the respective boot parameter. For example, for a Standard M416-208s v2 VM, the sufficient size is 512 MB, so the boot parameter would be crashkernel=512M . Open the GRUB configuration file and add crashkernel=512M to the boot parameter line. Update the GRUB configuration file. In RHEL 9.2 and earlier: In RHEL 9.3 and later: Reboot the VM to allocate separate kernel crash memory to the VM. Verification Ensure that kdump is active and running. Additional resources Installing kdump How to troubleshoot kernel crashes, hangs, or reboots with kdump on Red Hat Enterprise Linux (Red Hat Knowledgebase) Memory requirements for kdump 3.13. Additional resources Red Hat in the Public Cloud Red Hat Cloud Access Reference Guide Frequently Asked Questions and Recommended Practices for Microsoft Azure | [
"virt-install --name kvmtest --memory 2048 --vcpus 2 --disk rhel-9.0-x86_64-kvm.qcow2,bus=virtio --import --os-variant=rhel9.0",
"lsinitrd | grep hv",
"lsinitrd | grep hv drwxr-xr-x 2 root root 0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el9.x86_64/kernel/drivers/hv -rw-r--r-- 1 root root 31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el9.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz -rw-r--r-- 1 root root 25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el9.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz -rw-r--r-- 1 root root 9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el9.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz",
"add_drivers+=\" hv_vmbus \" add_drivers+=\" hv_netvsc \" add_drivers+=\" hv_storvsc \" add_drivers+=\" nvme \"",
"dracut -f -v --regenerate-all",
"subscription-manager register --auto-attach Installed Product Current Status: Product Name: Red Hat Enterprise Linux for x86_64 Status: Subscribed",
"dnf install cloud-init hyperv-daemons -y",
"reporting: logging: type: log telemetry: type: hyperv",
"datasource_list: [ Azure ] datasource: Azure: apply_network_config: False",
"blacklist nouveau blacklist lbm-nouveau blacklist floppy blacklist amdgpu blacklist skx_edac blacklist intel_cstate",
"rm -f /etc/udev/rules.d/70-persistent-net.rules rm -f /etc/udev/rules.d/75-persistent-net-generator.rules rm -f /etc/udev/rules.d/80-net-name-slot-rules",
"SUBSYSTEM==\"net\", DRIVERS==\"hv_pci\", ACTION==\"add\", ENV{NM_UNMANAGED}=\"1\"",
"systemctl enable sshd systemctl is-enabled sshd",
"GRUB_TIMEOUT=10",
"rhgb quiet",
"GRUB_CMDLINE_LINUX=\"loglevel=3 crashkernel=auto console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300\" GRUB_TIMEOUT_STYLE=countdown GRUB_TERMINAL=\"serial console\" GRUB_SERIAL_COMMAND=\"serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1\"",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/grub2/grub.cfg --update-bls-cmdline",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/grub2/grub.cfg --update-bls-cmdline",
"dnf install WALinuxAgent -y systemctl enable waagent",
"Provisioning.DeleteRootPassword=y ResourceDisk.Format=n ResourceDisk.EnableSwap=n",
"subscription-manager unregister",
"waagent -force -deprovision",
"export HISTSIZE=0 poweroff",
"qemu-img convert -f qcow2 -O raw <image-name> .qcow2 <image-name> .raw",
"#!/bin/bash MB=USD((1024 * 1024)) size=USD(qemu-img info -f raw --output json \"USD1\" | gawk 'match(USD0, /\"virtual-size\": ([0-9]+),/, val) {print val[1]}') rounded_size=USD(((USDsize/USDMB + 1) * USDMB)) if [ USD((USDsize % USDMB)) -eq 0 ] then echo \"Your image is already aligned. You do not need to resize.\" exit 1 fi echo \"rounded size = USDrounded_size\" export rounded_size",
"sh align.sh <image-xxx> .raw",
"qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx> .raw <image.xxx> .vhd",
"qemu-img resize -f raw <image-xxx> .raw <rounded-value>",
"qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx> .raw <image.xxx> .vhd",
"sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc",
"sudo sh -c 'echo -e \"[azure-cli]\\nname=Azure CLI\\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\\nenabled=1\\ngpgcheck=1\\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc\" > /etc/yum.repos.d/azure-cli.repo'",
"dnf check-update",
"sudo dnf install python3",
"sudo dnf install -y azure-cli",
"az",
"az login",
"az group create --name <resource-group> --location <azure-region>",
"[clouduser@localhost]USD az group create --name azrhelclirsgrp --location southcentralus { \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp\", \"location\": \"southcentralus\", \"managedBy\": null, \"name\": \"azrhelclirsgrp\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": null }",
"az storage account create -l <azure-region> -n <storage-account-name> -g <resource-group> --sku <sku_type>",
"[clouduser@localhost]USD az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS { \"accessTier\": null, \"creationTime\": \"2017-04-05T19:10:29.855470+00:00\", \"customDomain\": null, \"encryption\": null, \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact\", \"kind\": \"StorageV2\", \"lastGeoFailoverTime\": null, \"location\": \"southcentralus\", \"name\": \"azrhelclistact\", \"primaryEndpoints\": { \"blob\": \"https://azrhelclistact.blob.core.windows.net/\", \"file\": \"https://azrhelclistact.file.core.windows.net/\", \"queue\": \"https://azrhelclistact.queue.core.windows.net/\", \"table\": \"https://azrhelclistact.table.core.windows.net/\" }, \"primaryLocation\": \"southcentralus\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"secondaryEndpoints\": null, \"secondaryLocation\": null, \"sku\": { \"name\": \"Standard_LRS\", \"tier\": \"Standard\" }, \"statusOfPrimary\": \"available\", \"statusOfSecondary\": null, \"tags\": {}, \"type\": \"Microsoft.Storage/storageAccounts\" }",
"az storage account show-connection-string -n <storage-account-name> -g <resource-group>",
"[clouduser@localhost]USD az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp { \"connectionString\": \"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\" }",
"export AZURE_STORAGE_CONNECTION_STRING=\"<storage-connection-string>\"",
"[clouduser@localhost]USD export AZURE_STORAGE_CONNECTION_STRING=\"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\"",
"az storage container create -n <container-name>",
"[clouduser@localhost]USD az storage container create -n azrhelclistcont { \"created\": true }",
"az network vnet create -g <resource group> --name <vnet-name> --subnet-name <subnet-name>",
"[clouduser@localhost]USD az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1 { \"newVNet\": { \"addressSpace\": { \"addressPrefixes\": [ \"10.0.0.0/16\" ] }, \"dhcpOptions\": { \"dnsServers\": [] }, \"etag\": \"W/\\\"\\\"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1\", \"location\": \"southcentralus\", \"name\": \"azrhelclivnet1\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceGuid\": \"0f25efee-e2a6-4abe-a4e9-817061ee1e79\", \"subnets\": [ { \"addressPrefix\": \"10.0.0.0/24\", \"etag\": \"W/\\\"\\\"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1\", \"ipConfigurations\": null, \"name\": \"azrhelclisubnet1\", \"networkSecurityGroup\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceNavigationLinks\": null, \"routeTable\": null } ], \"tags\": {}, \"type\": \"Microsoft.Network/virtualNetworks\", \"virtualNetworkPeerings\": null } }",
"az storage blob upload --account-name <storage-account-name> --container-name <container-name> --type page --file <path-to-vhd> --name <image-name>.vhd",
"[clouduser@localhost]USD az storage blob upload --account-name azrhelclistact --container-name azrhelclistcont --type page --file rhel-image-{ProductNumber}.vhd --name rhel-image-{ProductNumber}.vhd Percent complete: %100.0",
"az storage blob url -c <container-name> -n <image-name>.vhd",
"az storage blob url -c azrhelclistcont -n rhel-image-9.vhd \"https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-9.vhd\"",
"az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux",
"az image create -n rhel9 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-9.vhd --os-type linux",
"az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --admin-username <administrator-name> --generate-ssh-keys --image <path-to-image>",
"[clouduser@localhost]USD az vm create -g azrhelclirsgrp2 -l southcentralus -n rhel-azure-vm-1 --vnet-name azrhelclivnet1 --subnet azrhelclisubnet1 --size Standard_A2 --os-disk-name vm-1-osdisk --admin-username clouduser --generate-ssh-keys --image rhel9 { \"fqdns\": \"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/virtualMachines/rhel-azure-vm-1\", \"location\": \"southcentralus\", \"macAddress\": \"\", \"powerState\": \"VM running\", \"privateIpAddress\": \"10.0.0.4\", \"publicIpAddress\": \" <public-IP-address> \", \"resourceGroup\": \"azrhelclirsgrp2\"",
"[clouduser@localhost]USD ssh -i /home/clouduser/.ssh/id_rsa clouduser@ <public-IP-address> . The authenticity of host ', <public-IP-address> ' can't be established. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added ' <public-IP-address> ' (ECDSA) to the list of known hosts. [clouduser@rhel-azure-vm-1 ~]USD",
"az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --authentication-type password --admin-username <administrator-name> --admin-password <ssh-password> --image <path-to-image>",
"ssh <admin-username> @ <public-ip-address>",
"az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --admin-username <administrator-name> --ssh-key-value <path-to-existing-ssh-key> --image <path-to-image>",
"ssh -i <path-to-existing-ssh-key> <admin-username> @ <public-ip-address>",
"subscription-manager register --auto-attach",
"insights-client register --display-name <display-name-value>",
"subscription-manager config --rhsmcertd.auto_registration=1",
"systemctl enable rhsmcertd.service",
"subscription-manager config --rhsm.manage_repos=0",
"subscription-manager identity system identity: fdc46662-c536-43fb-a18a-bbcb283102b7 name: 192.168.122.222 org name: 6340056 org ID: 6340056",
"dnf install kexec-tools",
"grep -v \"#\" /etc/kdump.conf path /var/crash core_collector makedumpfile -l --message-level 7 -d 31",
"sed s/\"path /var/crash\"/\"path /mnt/crash\"",
"vi /etc/default/grub GRUB_CMDLINE_LINUX=\"console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300 crashkernel=512M\"",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/grub2/grub.cfg --update-bls-cmdline",
"systemctl status kdump ● kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor prese> Active: active (exited) since Fri 2024-02-09 10:50:18 CET; 1h 20min ago Process: 1252 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCES> Main PID: 1252 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 16975) Memory: 512B CGroup: /system.slice/kdump.service"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_rhel_9_on_microsoft_azure/assembly_deploying-a-rhel-image-as-a-virtual-machine-on-microsoft-azure_cloud-content-azure |
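A quick way to confirm that the kdump setup described above actually captures a vmcore is to force a test crash. This step is not part of the original procedure, it reboots the instance, and it should only be run on a disposable test VM; the magic SysRq interface used here is a standard kernel feature:
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
After the instance comes back up, a new vmcore should be present under the configured kdump path, for example /var/crash/ or /mnt/crash/.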
Chapter 2. Installing .NET 8.0 | Chapter 2. Installing .NET 8.0 .NET 8.0 is included in the AppStream repositories for RHEL 8. The AppStream repositories are enabled by default on RHEL 8 systems. You can install the .NET 8.0 runtime with the latest 8.0 Software Development Kit (SDK). When a newer SDK becomes available for .NET 8.0, you can install it by running sudo yum install . Prerequisites Installed and registered RHEL 8.9 with attached subscriptions. For more information, see Performing a standard RHEL 8 installation . Procedure Install .NET 8.0 and all of its dependencies: Verification steps Verify the installation: The output returns the relevant information about the .NET installation and the environment. | [
"sudo yum install dotnet-sdk-8.0 -y",
"dotnet --info"
]
| https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_8/installing-dotnet_getting-started-with-dotnet-on-rhel-8 |
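After installing the SDK as shown above, a quick way to confirm that the toolchain works end to end is to create and run a minimal console application. The project name HelloRhel is only an illustrative placeholder:
dotnet new console -o HelloRhel
cd HelloRhel
dotnet run
The dotnet new, dotnet build, and dotnet run commands are part of the standard .NET SDK workflow and require no additional packages.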
Chapter 4. Kubernetes overview | Chapter 4. Kubernetes overview Kubernetes is an open source container orchestration tool developed by Google. You can run and manage container-based workloads by using Kubernetes. The most common Kubernetes use case is to deploy an array of interconnected microservices, building an application in a cloud native way. You can create Kubernetes clusters that can span hosts across on-premise, public, private, or hybrid clouds. Traditionally, applications were deployed on top of a single operating system. With virtualization, you can split the physical host into several virtual hosts. Working on virtual instances on shared resources is not optimal for efficiency and scalability. Because a virtual machine (VM) consumes as many resources as a physical machine, providing resources to a VM such as CPU, RAM, and storage can be expensive. Also, you might see your application degrading in performance due to virtual instance usage on shared resources. Figure 4.1. Evolution of container technologies for classical deployments To solve this problem, you can use containerization technologies that segregate applications in a containerized environment. Similar to a VM, a container has its own filesystem, vCPU, memory, process space, dependencies, and more. Containers are decoupled from the underlying infrastructure, and are portable across clouds and OS distributions. Containers are inherently much lighter than a fully-featured OS, and are lightweight isolated processes that run on the operating system kernel. VMs are slower to boot, and are an abstraction of physical hardware. VMs run on a single machine with the help of a hypervisor. You can perform the following actions by using Kubernetes: Sharing resources Orchestrating containers across multiple hosts Installing new hardware configurations Running health checks and self-healing applications Scaling containerized applications 4.1. Kubernetes components Table 4.1. Kubernetes components Component Purpose kube-proxy Runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. kube-controller-manager Governs the state of the cluster. kube-scheduler Allocates pods to nodes. etcd Stores cluster data. kube-apiserver Validates and configures data for the API objects. kubelet Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. kubectl Allows you to define how you want to run workloads. Use the kubectl command to interact with the kube-apiserver . Node Node is a physical machine or a VM in a Kubernetes cluster. The control plane manages every node and schedules pods across the nodes in the Kubernetes cluster. container runtime container runtime runs containers on a host operating system. You must install a container runtime on each node so that pods can run on the node. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. container-registry Stores and accesses the container images. Pod The pod is the smallest logical unit in Kubernetes. A pod contains one or more containers to run in a worker node. 4.2. Kubernetes resources A custom resource is an extension of the Kubernetes API. You can customize Kubernetes clusters by using custom resources. Operators are software extensions which manage applications and their components with the help of custom resources. 
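As a hedged illustration of the components and custom resources described above, an administrator with a working kubeconfig can inspect them with standard kubectl commands; the exact output depends on the cluster:
kubectl get nodes
kubectl get pods -n kube-system
kubectl api-resources
kubectl get crds
The first command lists the machines managed by the control plane, the kube-system pods include components such as kube-proxy, and the last command shows any custom resource definitions added by Operators.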
Kubernetes uses a declarative model when you want a fixed desired result while dealing with cluster resources. By using Operators, Kubernetes defines its states in a declarative way. You can modify the Kubernetes cluster resources by using imperative commands. An Operator acts as a control loop which continuously compares the desired state of resources with the actual state of resources and puts actions in place to bring reality in line with the desired state. Figure 4.2. Kubernetes cluster overview Table 4.2. Kubernetes Resources Resource Purpose Service Kubernetes uses services to expose a running application on a set of pods. ReplicaSets Kubernetes uses ReplicaSets to maintain a constant number of pods. Deployment A resource object that maintains the life cycle of an application. Kubernetes is a core component of OpenShift Container Platform. You can use OpenShift Container Platform for developing and running containerized applications. With its foundation in Kubernetes, the OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. You can extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments by using the OpenShift Container Platform. Figure 4.3. Architecture of Kubernetes A cluster is a single computational unit consisting of multiple nodes in a cloud environment. A Kubernetes cluster includes a control plane and worker nodes. You can run Kubernetes containers across various machines and environments. The control plane node controls and maintains the state of a cluster. You can run the Kubernetes application by using worker nodes. You can use the Kubernetes namespace to differentiate cluster resources in a cluster. Namespace scoping is applicable for resource objects, such as deployment, service, and pods. You cannot use namespace for cluster-wide resource objects such as storage class, nodes, and persistent volumes. 4.3. Kubernetes conceptual guidelines Before getting started with the OpenShift Container Platform, consider these conceptual guidelines of Kubernetes: Start with one or more worker nodes to run the container workloads. Manage the deployment of those workloads from one or more control plane nodes. Wrap containers in a deployment unit called a pod. Using pods provides extra metadata with the container and offers the ability to group several containers in a single deployment entity. Create special kinds of assets. For example, services are represented by a set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand. The API to an OpenShift Container Platform cluster is 100% Kubernetes. Nothing changes between a container running on any other Kubernetes and running on OpenShift Container Platform. No changes to the application. OpenShift Container Platform brings added-value features to provide enterprise-ready enhancements to Kubernetes. The OpenShift Container Platform CLI tool ( oc ) is compatible with kubectl .
While the Kubernetes API is 100% accessible within OpenShift Container Platform, the kubectl command-line tool lacks many features that would make it more user-friendly. OpenShift Container Platform offers a set of features and a command-line tool, oc . Although Kubernetes excels at managing your applications, it does not specify or manage platform-level requirements or deployment processes. Powerful and flexible platform management tools and processes are important benefits that OpenShift Container Platform offers. You must add authentication, networking, security, monitoring, and logs management to your containerization platform. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/about/kubernetes-overview |
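To make the conceptual guidelines above concrete, the following is a minimal sketch that wraps a container image in pods, exposes them through a service, and scales the replica count; the deployment name hello and the image are placeholders, and each command also works with the compatible oc CLI:
kubectl create deployment hello --image=registry.access.redhat.com/ubi9/httpd-24
kubectl expose deployment hello --port=8080
kubectl scale deployment hello --replicas=3
kubectl get pods,svc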
Chapter 2. Configuring audit logs for Developer Hub on OpenShift Container Platform | Chapter 2. Configuring audit logs for Developer Hub on OpenShift Container Platform Use the OpenShift Container Platform web console to configure the following OpenShift Container Platform logging components to use audit logging for Developer Hub: Logging deployment Configure the logging environment, including both the CPU and memory limits for each logging component. For more information, see Red Hat OpenShift Container Platform - Configuring your Logging deployment . Logging collector Configure the spec.collection stanza in the ClusterLogging custom resource (CR) to use a supported modification to the log collector and collect logs from STDOUT . For more information, see Red Hat OpenShift Container Platform - Configuring the logging collector . Log forwarding Send logs to specific endpoints inside and outside your OpenShift Container Platform cluster by specifying a combination of outputs and pipelines in a ClusterLogForwarder CR. For more information, see Red Hat OpenShift Container Platform - Enabling JSON log forwarding and Red Hat OpenShift Container Platform - Configuring log forwarding . 2.1. Forwarding Red Hat Developer Hub audit logs to Splunk You can use the Red Hat OpenShift Logging (OpenShift Logging) Operator and a ClusterLogForwarder instance to capture the streamed audit logs from a Developer Hub instance and forward them to the HTTPS endpoint associated with your Splunk instance. Prerequisites You have a cluster running on a supported OpenShift Container Platform version. You have an account with cluster-admin privileges. You have a Splunk Cloud account or Splunk Enterprise installation. Procedure Log in to your OpenShift Container Platform cluster. Install the OpenShift Logging Operator in the openshift-logging namespace and switch to the namespace: Example command to switch to a namespace oc project openshift-logging Create a serviceAccount named log-collector and bind the collect-application-logs role to the serviceAccount : Example command to create a serviceAccount oc create sa log-collector Example command to bind a role to a serviceAccount oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector Generate a hecToken in your Splunk instance. Create a key/value secret in the openshift-logging namespace and verify the secret: Example command to create a key/value secret with hecToken oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token> Example command to verify a secret oc -n openshift-logging get secret/splunk-secret -o yaml Create a basic `ClusterLogForwarder`resource YAML file as follows: Example `ClusterLogForwarder`resource YAML file apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging For more information, see Creating a log forwarder . Define the following ClusterLogForwarder configuration using OpenShift web console or OpenShift CLI: Specify the log-collector as serviceAccount in the YAML file: Example serviceAccount configuration serviceAccount: name: log-collector Configure inputs to specify the type and source of logs to forward. 
The following configuration enables the forwarder to capture logs from all applications in a provided namespace: Example inputs configuration inputs: - name: my-app-logs-input type: application application: includes: - namespace: my-rhdh-project containerLimit: maxRecordsPerSecond: 100 For more information, see Forwarding application logs from specific pods . Configure outputs to specify where the captured logs are sent. In this step, focus on the splunk type. You can either use tls.insecureSkipVerify option if the Splunk endpoint uses self-signed TLS certificates (not recommended) or provide the certificate chain using a Secret. Example outputs configuration outputs: - name: splunk-receiver-application type: splunk splunk: authentication: token: key: hecToken secretName: splunk-secret index: main url: 'https://my-splunk-instance-url' rateLimit: maxRecordsPerSecond: 250 For more information, see Forwarding logs to Splunk in OpenShift Container Platform documentation. Optional: Filter logs to include only audit logs: Example filters configuration filters: - name: audit-logs-only type: drop drop: - test: - field: .message notMatches: isAuditLog For more information, see Filtering logs by content in OpenShift Container Platform documentation. Configure pipelines to route logs from specific inputs to designated outputs. Use the names of the defined inputs and outputs to specify multiple inputRefs and outputRefs in each pipeline: Example pipelines configuration pipelines: - name: my-app-logs-pipeline detectMultilineErrors: true inputRefs: - my-app-logs-input outputRefs: - splunk-receiver-application filterRefs: - audit-logs-only Run the following command to apply the ClusterLogForwarder configuration: Example command to apply ClusterLogForwarder configuration oc apply -f <ClusterLogForwarder-configuration.yaml> Optional: To reduce the risk of log loss, configure your ClusterLogForwarder pods using the following options: Define the resource requests and limits for the log collector as follows: Example collector configuration collector: resources: requests: cpu: 250m memory: 64Mi ephemeral-storage: 250Mi limits: cpu: 500m memory: 128Mi ephemeral-storage: 500Mi Define tuning options for log delivery, including delivery , compression , and RetryDuration . Tuning can be applied per output as needed. Example tuning configuration tuning: delivery: AtLeastOnce 1 compression: none minRetryDuration: 1s maxRetryDuration: 10s 1 AtLeastOnce delivery mode means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash. Verification Confirm that logs are being forwarded to your Splunk instance by viewing them in the Splunk dashboard. Troubleshoot any issues using OpenShift Container Platform and Splunk logs as needed. | [
"project openshift-logging",
"create sa log-collector",
"create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector",
"-n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>",
"-n openshift-logging get secret/splunk-secret -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging",
"serviceAccount: name: log-collector",
"inputs: - name: my-app-logs-input type: application application: includes: - namespace: my-rhdh-project containerLimit: maxRecordsPerSecond: 100",
"outputs: - name: splunk-receiver-application type: splunk splunk: authentication: token: key: hecToken secretName: splunk-secret index: main url: 'https://my-splunk-instance-url' rateLimit: maxRecordsPerSecond: 250",
"filters: - name: audit-logs-only type: drop drop: - test: - field: .message notMatches: isAuditLog",
"pipelines: - name: my-app-logs-pipeline detectMultilineErrors: true inputRefs: - my-app-logs-input outputRefs: - splunk-receiver-application filterRefs: - audit-logs-only",
"apply -f <ClusterLogForwarder-configuration.yaml>",
"collector: resources: requests: cpu: 250m memory: 64Mi ephemeral-storage: 250Mi limits: cpu: 500m memory: 128Mi ephemeral-storage: 500Mi",
"tuning: delivery: AtLeastOnce 1 compression: none minRetryDuration: 1s maxRetryDuration: 10s"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/audit_log/con-audit-log-config_assembly-audit-log |
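As a hedged troubleshooting aid for the forwarding setup described above, and not part of the original procedure, you can confirm that the forwarder resource exists and that the collector pods are running before checking the Splunk dashboard:
oc get clusterlogforwarder -n openshift-logging
oc get pods -n openshift-logging
oc describe clusterlogforwarder instance -n openshift-logging
The resource name instance matches the metadata.name used in the example ClusterLogForwarder above.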
Chapter 4. Validating the syntax of existing attribute values | Chapter 4. Validating the syntax of existing attribute values With syntax validation, the Directory Server checks whether an attribute value follows the rules of the syntax provided in the definition of that attribute. The Directory Server records the results of syntax validation tasks in the /var/log/dirsrv/slapd-instance_name/errors file. Manual syntax validation is required if: You have the syntax validation disabled in the nsslapd-syntaxcheck parameter. Note Red Hat recommends that syntax validation should not be disabled. You migrate data from a server with syntax validation disabled or without syntax validation. 4.1. Creating a syntax validation task using the dsconf schema validate-syntax command With the dsconf schema validate-syntax command, you can create a syntax validation task to check every modified attribute and ensure that the new value has the required syntax. Procedure To create a syntax validation task, enter: # dsconf -D "cn=Directory Manager" ldap://server.example.com schema validate-syntax -f '(objectclass=inetorgperson)' ou=People,dc=example,dc=com In this example, the command creates a task that validates the syntax of all values in the ou=People,dc=example,dc=com sub-tree which match the (objectclass=inetorgperson) filter. 4.2. Creating a syntax validation task using a cn task entry The cn=tasks,cn=config entry in the Directory Server configuration is a container entry for temporary entries used by the server for managing tasks. You can initiate a syntax validation operation by creating a task in the cn=syntax validate,cn=tasks,cn=config entry. Procedure To initiate a syntax validation operation, create a task in the cn=syntax validate,cn=tasks,cn=config entry as follows: # ldapadd -D "cn=Directory Manager" -W -p 389 -H ldap://server.example.com -x dn: cn=example_syntax_validate,cn=syntax validate,cn=tasks,cn=config objectclass: extensibleObject cn: cn=example_syntax_validate basedn: ou=People,dc=example,dc=com filter: (objectclass=inetorgperson) In this example, the command creates a task that validates the syntax of all values in the ou=People,dc=example,dc=com sub-tree which match the (objectclass=inetorgperson) filter. When the task completes, Directory Server deletes the entry from the directory configuration. Additional resources Configuration and schema reference | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com schema validate-syntax -f '(objectclass=inetorgperson)' ou=People,dc=example,dc=com",
"ldapadd -D \"cn=Directory Manager\" -W -p 389 -H ldap://server.example.com -x dn: cn=example_syntax_validate,cn=syntax validate,cn=tasks,cn=config objectclass: extensibleObject cn: cn=example_syntax_validate basedn: ou=People,dc=example,dc=com filter: (objectclass=inetorgperson)"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/managing_the_directory_schema/assembly_validating-the-syntax-of-existing-attribute-values_managing-the-directory-schema |
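Related to the note above that syntax validation should not remain disabled: the check is controlled by the nsslapd-syntaxcheck configuration attribute, and a sketch of re-enabling it with dsconf, assuming the same bind DN and URL as in the examples above, would be:
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-syntaxcheck=on
After re-enabling the check, run one of the validation tasks shown above to find values that were added while the check was off.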
Configuring Capsules with a load balancer | Configuring Capsules with a load balancer Red Hat Satellite 6.16 Distribute load among Capsules Red Hat Satellite Documentation Team [email protected] | [
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \"",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms --enable=satellite-capsule-6.16-for-rhel-9-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms",
"dnf repolist enabled",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-capsule-6.16-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-capsule:el8",
"dnf repolist enabled",
"dnf upgrade",
"dnf install satellite-capsule",
"capsule-certs-generate --certs-tar \"/root/ capsule.example.com -certs.tar\" --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule.example.com",
"scp /root/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar",
"--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-script",
"satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" capsule.example.com-certs.tar \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \"",
"mkdir /root/capsule_cert",
"openssl genrsa -out /root/capsule_cert/capsule_cert_key.pem 4096",
"[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name x509_extensions = usr_cert prompt = no [ req_distinguished_name ] commonName = capsule.example.com 1 [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [alt_names] 2 DNS.1 = loadbalancer.example.com DNS.2 = capsule.example.com",
"[req_distinguished_name] CN = capsule.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4",
"openssl req -new -key /root/capsule_cert/capsule_cert_key.pem \\ 1 -config /root/capsule_cert/openssl.cnf \\ 2 -out /root/capsule_cert/capsule_cert_csr.pem 3",
"katello-certs-check -c /root/capsule_cert/capsule_cert.pem \\ 1 -k /root/capsule_cert/capsule_cert_key.pem \\ 2 -b /root/capsule_cert/ca_cert_bundle.pem 3",
"--foreman-proxy-cname loadbalancer.example.com",
"capsule-certs-generate --certs-tar /root/capsule_cert/capsule.tar --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule.example.com --server-ca-cert /root/capsule_cert/ca_cert_bundle.pem --server-cert /root/capsule_cert/capsule.pem --server-key /root/capsule_cert/capsule.pem",
"scp /root/ capsule.example.com -certs.tar root@ capsule.example.com : capsule.example.com -certs.tar",
"--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-script",
"satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" capsule.example.com-certs.tar \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \"",
"capsule-certs-generate --certs-tar \"/root/ capsule-ca.example.com -certs.tar\" --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule-ca.example.com",
"scp /root/ capsule-ca.example.com -certs.tar root@ capsule-ca.example.com : capsule-ca.example.com -certs.tar",
"--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-puppetca \"true\" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"true\"",
"satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" capsule-ca.example.com-certs.tar \" --enable-foreman-proxy-plugin-remote-execution-script --enable-puppet --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --foreman-proxy-puppetca \"true\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule-ca.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server true --puppet-server-ca \"true\"",
"systemctl stop puppetserver",
"puppetserver ca generate --ca-client --certname capsule.example.com --subject-alt-names loadbalancer.example.com",
"systemctl start puppetserver",
"capsule-certs-generate --certs-tar \"/root/ capsule.example.com -certs.tar\" --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule.example.com",
"scp /root/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar",
"satellite-maintain packages install puppetserver",
"mkdir -p /etc/puppetlabs/puppet/ssl/certs/ /etc/puppetlabs/puppet/ssl/private_keys/ /etc/puppetlabs/puppet/ssl/public_keys/",
"scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ca.pem /etc/puppetlabs/puppet/ssl/certs/ca.pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem",
"chown -R puppet:puppet /etc/puppetlabs/puppet/ssl/",
"restorecon -Rv /etc/puppetlabs/puppet/ssl/",
"--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-puppetca \"false\" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"false\"",
"satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" capsule.example.com-certs.tar \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --foreman-proxy-puppetca \"false\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"false\"",
"mkdir /root/capsule_cert",
"openssl genrsa -out /root/capsule_cert/capsule_cert_key.pem 4096",
"[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name x509_extensions = usr_cert prompt = no [ req_distinguished_name ] commonName = capsule.example.com 1 [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [alt_names] 2 DNS.1 = loadbalancer.example.com DNS.2 = capsule.example.com",
"[req_distinguished_name] CN = capsule.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4",
"openssl req -new -key /root/capsule_cert/capsule_cert_key.pem \\ 1 -config /root/capsule_cert/openssl.cnf \\ 2 -out /root/capsule_cert/capsule_cert_csr.pem 3",
"katello-certs-check -c /root/capsule_cert/capsule_cert.pem \\ 1 -k /root/capsule_cert/capsule_cert_key.pem \\ 2 -b /root/capsule_cert/ca_cert_bundle.pem 3",
"--foreman-proxy-cname loadbalancer.example.com",
"capsule-certs-generate --certs-tar /root/capsule_cert/capsule-ca.tar --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule-ca.example.com --server-ca-cert /root/capsule_cert/ca_cert_bundle.pem --server-cert /root/capsule_cert/capsule-ca.pem --server-key /root/capsule_cert/capsule-ca.pem",
"--enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-puppetca \"true\" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"true\"",
"satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" certs.tgz \" --enable-foreman-proxy-plugin-remote-execution-script --enable-puppet --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \"oauth key\" --foreman-proxy-oauth-consumer-secret \"oauth secret\" --foreman-proxy-puppetca \"true\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule-ca.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server true --puppet-server-ca \"true\"",
"systemctl stop puppetserver",
"puppetserver ca generate --ca-client --certname capsule.example.com --subject-alt-names loadbalancer.example.com",
"systemctl start puppetserver",
"--foreman-proxy-cname loadbalancer.example.com",
"capsule-certs-generate --certs-tar /root/capsule_cert/capsule.tar --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule.example.com --server-ca-cert /root/capsule_cert/ca_cert_bundle.pem --server-cert /root/capsule_cert/capsule.pem --server-key /root/capsule_cert/capsule.pem",
"scp /root/ capsule.example.com -certs.tar root@ capsule.example.com : capsule.example.com -certs.tar",
"satellite-maintain packages install puppetserver",
"mkdir -p /etc/puppetlabs/puppet/ssl/certs/ /etc/puppetlabs/puppet/ssl/private_keys/ /etc/puppetlabs/puppet/ssl/public_keys/",
"scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ca.pem /etc/puppetlabs/puppet/ssl/certs/ca.pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem",
"chown -R puppet:puppet /etc/puppetlabs/puppet/ssl/",
"restorecon -Rv /etc/puppetlabs/puppet/ssl/",
"--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-puppetca \"false\" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"false\"",
"satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" capsule.example.com-certs.tar \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --foreman-proxy-puppetca \"false\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"false\"",
"satellite-installer --foreman-proxy-registration true --foreman-proxy-templates true",
"satellite-installer --foreman-proxy-registration-url \"https:// loadbalancer.example.com :9090\" --foreman-proxy-template-url \"http:// loadbalancer.example.com :8000\"",
"dnf install haproxy",
"dnf install policycoreutils-python-utils",
"semanage boolean --modify --on haproxy_connect_any",
"systemctl enable --now haproxy",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \"",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'",
"/usr/libexec/platform-python bootstrap.py --activationkey=\" My_Activation_Key \" --enablerepos=satellite-client-6-for-rhel-8-<arch>-rpms \\ 1 --force \\ 2 --hostgroup=\" My_Host_Group \" --location=\" My_Location \" --login= admin --organization=\" My_Organization \" --puppet-ca-port 8141 \\ 3 --server loadbalancer.example.com",
"python bootstrap.py --login= admin --activationkey=\" My_Activation_Key \" --enablerepos=rhel-7-server-satellite-client-6-rpms --force \\ 1 --hostgroup=\" My_Host_Group \" --location=\" My_Location \" --organization=\" My_Organization \" --puppet-ca-port 8141 \\ 2 --server loadbalancer.example.com",
"Foreman proxy to which reports should be uploaded :server: ' loadbalancer.example.com ' :port: 9090",
"Foreman proxy to which reports should be uploaded :server: ' loadbalancer.example.com ' :port: 9090",
"dnf module list --enabled",
"dnf module list --enabled",
"dnf module reset ruby",
"dnf module list --enabled",
"dnf module reset postgresql",
"dnf module enable satellite-capsule:el8",
"dnf install postgresql-upgrade",
"postgresql-setup --upgrade"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/configuring_capsules_with_a_load_balancer/index |
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 8 release of Eclipse Temurin includes, see OpenJDK 8u422 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements that the Eclipse Temurin 8.0.422 release provides: System properties for controlling the keep-alive behavior of HTTPURLConnection OpenJDK 8.0.422 includes the following new system properties that you can use to control the keep-alive behavior of HTTPURLConnection in situations where the server does not specify a keep-alive time: The http.keepAlive.time.server property controls the number of seconds before an idle connection to a server is closed. The http.keepAlive.time.proxy property controls the number of seconds before an idle connection to a proxy is closed. Note If the server or proxy specifies a keep-alive time in a Keep-Alive response header, this keep-alive time takes precedence over any value that you specify for the http.keepAlive.time.server or http.keepAlive.time.proxy property. See JDK-8278067 (JDK Bug System) . GlobalSign R46 and E46 root certificates added In OpenJDK 8.0.422, the cacerts truststore includes two GlobalSign TLS root certificates: Certificate 1 Name: GlobalSign Alias name: globalsignr46 Distinguished name: CN=GlobalSign Root R46, O=GlobalSign nv-sa, C=BE Certificate 2 Name: GlobalSign Alias name: globalsigne46 Distinguished name: CN=GlobalSign Root E46, O=GlobalSign nv-sa, C=BE See JDK-8316138 (JDK Bug System) . Revised on 2024-08-02 13:17:59 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.422_release_notes/openjdk-temurin-features-8.0.422_openjdk |
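A hedged illustration of the keep-alive properties described above: because they are ordinary Java system properties, they can be passed on the command line when starting an application. The jar name and the 60-second value are placeholders chosen for this example:
java -Dhttp.keepAlive.time.server=60 -Dhttp.keepAlive.time.proxy=60 -jar my-app.jar
With these settings, idle HTTP connections for which the server or proxy did not specify a keep-alive time are closed after 60 seconds.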
8.43. device-mapper-persistent-data | 8.43. device-mapper-persistent-data 8.43.1. RHBA-2014:1409 - device-mapper-persistent-data bug fix and enhancement update Updated device-mapper-persistent-data packages that fix one bug and add two enhancements are now available. The device-mapper-persistent-data packages provide device-mapper thin provisioning (thinp) tools. This update adds thin provisioning tools to support the dm-cache, as well as the dm-era functionality for change tracking to the device-mapper-persistent-data packages in Red Hat Enterprise Linux 6 as a Technology Preview. (BZ# 1038236 , BZ# 1084081 ) More information about Red Hat Technology Previews is available here: https://access.redhat.com/support/offerings/techpreview/ In addition, this update fixes the following bug: Bug Fix BZ# 1035990 The thin_dump utility as well as other persistent-data tools supported only 512 B block sizes on accessed devices. Consequently, persistent-data tools could perform I/O to misaligned buffers. The underlying source code has been updated to support 4 KB block size, thus fixing this bug. Users of device-mapper-persistent-data are advised to upgrade to these updated packages, which fix this bug and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/device-mapper-persistent-data |
Chapter 5. Customer privacy | Chapter 5. Customer privacy Various Microsoft products have a feature that reports usage statistics, analytics, and various other metrics to Microsoft over the network. Microsoft calls this Telemetry. Red Hat is disabling telemetry because we do not recommend sending customer data to anyone without explicit permission. | null | https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_rpm_packages/customer-privacy_release-notes-for-dotnet-rpms |
Chapter 2. Installing a user-provisioned cluster on bare metal | Chapter 2. Installing a user-provisioned cluster on bare metal In OpenShift Container Platform 4.14, you can install a cluster on bare metal infrastructure that you provision. Important While you might be able to follow this procedure to deploy a cluster on virtualized or cloud environments, you must be aware of additional considerations for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in such an environment. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. Additional resources See Installing a user-provisioned bare metal cluster on a restricted network for more information about performing a restricted network installation on bare metal infrastructure that you provision. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. 
This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 
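A minimal sketch of such an approval step, assuming the standard oc client and cluster-admin credentials; the detailed procedure is in the Approving the certificate signing requests for your machines document referenced below:
oc get csr
oc adm certificate approve <csr_name>
Pending requests appear in the first command's output, and each one should be reviewed before it is approved.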
Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 2.3.4. Requirements for baremetal clusters on vSphere Ensure you enable the disk.EnableUUID parameter on all virtual machines in your cluster. Additional resources See Installing RHCOS and starting the OpenShift Container Platform bootstrap process for details on setting the disk.EnableUUID parameter's value to TRUE on VMware vSphere for user-provisioned infrastructure. 2.3.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 2.3.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. 
Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 2.3.6. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. 
A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.6.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. 
IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Additional resources Validating DNS resolution for user-provisioned infrastructure 2.3.7. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. 
Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.7.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components.
From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. 
Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. Additional resources Verifying node health 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 
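Before you continue, you can optionally confirm that the extracted installation program runs and reports the expected release; this quick sanity check is an optional aside rather than a required step of the documented procedure: USD ./openshift-install version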
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.9. Manually creating the installation configuration file Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for bare metal 2.9.1. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. 
If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. See Enabling cluster capabilities for more information on enabling cluster capabilities that were disabled prior to installation. See Optional cluster capabilities in OpenShift Container Platform 4.14 for more information about the features provided by each capability. 2.9.2. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note For bare metal installations, if you do not assign node IP addresses from the range that is specified in the networking.machineNetwork[].cidr field in the install-config.yaml file, you must include them in the proxy.noProxy field. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources See Recovering from expired control plane certificates for more information about recovering kubelet certificates. 2.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. 
Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 2.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. 
If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step.
Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. 
From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation.
Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. 
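Because the menu entries themselves are not reproduced here, the following is a minimal sketch of the general shape such entries take; the HTTP server address, file names, install device, and node type are placeholders rather than values defined in this document, so adjust them to match your environment. A PXE menu entry of this form would be added to the PXELINUX configuration:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign
An iPXE script follows the same pattern, with the initrd=main argument on the kernel line for UEFI systems:
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
boot
A GRUB entry for aarch64 uses the equivalent linux and initrd commands with the same coreos.* arguments.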
Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.11.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 2.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . 
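For example, a minimal sketch of setting a static address with nmcli in the live environment; the connection name, addresses, and DNS server below are illustrative placeholders, not values from this procedure:
USD sudo nmcli connection modify 'Wired connection 1' ipv4.method manual ipv4.addresses 10.10.10.2/24 ipv4.gateway 10.10.10.254 ipv4.dns 4.4.4.41
USD sudo nmcli connection up 'Wired connection 1'
Connection profiles written under /etc/NetworkManager/system-connections by these tools are what the --copy-network option in the next step carries over to the installed system.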
Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 2.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 2.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. 
With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:
variant: openshift
version: 4.14.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/disk/by-id/<device_name> 1
    partitions:
    - label: var
      start_mib: <partition_start_offset> 2
      size_mib: <partition_size> 3
      number: 5
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [defaults, prjquota] 4
    with_mount_unit: true
1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifests and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. Next steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 2.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions.
Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 2.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 2.11.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.14 boot image use a default console that is meant to accommodate most virtualized and bare metal setups.
Different cloud and virtualization platforms may use different default settings depending on the chosen architecture. Bare metal installations use the kernel default settings, which typically means the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 2.11.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 2.11.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system. For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations.
The customize subcommand is a general purpose tool that can embed other types of customizations as well. The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 2.11.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 2.11.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . 
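For example, a minimal sketch that appends a serial console argument to the live environment only, using the --live-karg-append option listed in the reference tables later in this section; the ISO file name and console options are placeholders: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --live-karg-append console=ttyS0,115200n8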
Your customizations are applied and affect every subsequent boot of the ISO image. Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 2.11.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 2.11.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 2.11.3.8. 
Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 2.11.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 2.11.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. 
You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 2.11.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 2.11.3.9. 
Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.11.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. 
If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. 
On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.11.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 2.9. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Embed an Ignition config in an ISO image. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. 
Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. 
--network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o , --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 2.11.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 2.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. 
For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 2.11.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. 
Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 2.11.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.14.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. Additional resources See Installing RHCOS and starting the OpenShift Container Platform bootstrap process for more information on using special coreos.inst.* arguments to direct the live installer. 2.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. 
You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 2.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 2.15.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. 
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.15.2.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: Then, change the line to 2.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.15.2.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). 
Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 2.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 2.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.18. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.14.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_bare_metal/installing-bare-metal |
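The "Approving the certificate signing requests for your machines" section above states that clusters on user-provisioned infrastructure need a method of automatically approving kubelet serving certificate requests, but it does not show one. The following is a minimal sketch of such a watcher, offered only as an illustration and not as the mechanism the documentation mandates: it assumes an authenticated oc session with permission to approve CSRs, and it approves every pending request submitted by a system:node: user without the additional identity checks the section calls for, so harden it before any real use.

# Sketch of a simple approver loop (illustrative only; adjust the interval and add checks).
while true; do
  # List pending CSRs together with the requesting user, keep only node-submitted
  # (serving) requests, and approve them in one batch.
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' \
    | awk '$2 ~ /^system:node:/ {print $1}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done

In practice, an equivalent job is often run from a systemd timer or cron entry rather than a foreground loop.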
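The image registry steps above describe editing configs.imageregistry/cluster interactively and changing managementState from Removed to Managed. If you prefer a non-interactive form, for example in a provisioning script, a patch such as the following can make the same change; this is an alternative sketch rather than the procedure shown above, so verify the resulting state on your own cluster.

# Switch the Image Registry Operator out of the Removed state without opening an editor.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"managementState":"Managed"}}'

# Confirm that the change took effect.
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{.spec.managementState}{"\n"}'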
Chapter 129. KafkaBridgeConsumerSpec schema reference | Chapter 129. KafkaBridgeConsumerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeConsumerSpec schema properties Configures consumer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Properties with the following prefixes cannot be set: bootstrap.servers group.id sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Bridge, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Example Kafka Bridge consumer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... consumer: config: auto.offset.reset: earliest enable.auto.commit: true # ... Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge deployment might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. 129.1. KafkaBridgeConsumerSpec schema properties Property Property type Description config map The Kafka consumer configuration used for consumer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # consumer: config: auto.offset.reset: earliest enable.auto.commit: true #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkabridgeconsumerspec-reference |
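To make the consumer configuration rules above concrete, the sketch below applies a KafkaBridge whose consumer.config mixes ordinary consumer options with one of the permitted ssl exceptions. The resource name, namespace, and bootstrap address are placeholders rather than values taken from the reference, so adapt them to your cluster; options under the forbidden prefixes would simply be disregarded, with a warning in the Cluster Operator log, as described above.

# Illustrative KafkaBridge with extra consumer options (names and addresses are placeholders).
cat <<'EOF' | oc apply -n kafka -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080
  consumer:
    config:
      auto.offset.reset: earliest
      enable.auto.commit: true
      fetch.min.bytes: 1024
      ssl.endpoint.identification.algorithm: https
EOF

If an option appears to have no effect, reviewing the Cluster Operator log (for example, oc logs deployment/strimzi-cluster-operator -n <operator_namespace>; the deployment name can vary by installation) shows whether it was disregarded.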
7.4. amanda | 7.4. amanda 7.4.1. RHBA-2013:0427 - amanda bug fix update Updated amanda packages that fix one bug are now available for Red Hat Enterprise Linux 6. AMANDA, the Advanced Maryland Automatic Network Disk Archiver, is a backup system that allows the administrator of a LAN to set up a single master backup server to back up multiple hosts to one or more tape drives or disk files. Bug Fix BZ#752096 Previously, the amandad daemon, which is required for successful running of AMANDA, was located in the amanda-client package; however, this package was not required during installation of the amanda-server package. Consequently, AMANDA did not work properly. The amanda-client package has been added to the amanda-server dependencies and AMANDA works correctly now. All AMANDA users are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/amanda |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/making-open-source-more-inclusive |
Chapter 1. About this tutorial | Chapter 1. About this tutorial The tutorial includes the following steps: Deploy a MySQL database server with a simple example database to OpenShift. Apply a custom resource in AMQ Streams to automatically build a Kafka Connect container image that includes the Debezium MySQL connector plug-in. Create the Debezium MySQL connector resource to capture changes in the database. Verify the connector deployment. View the change events that the connector emits to a Kafka topic from the database. Prerequisites You are familiar with OpenShift and AMQ Streams. You have access to an OpenShift cluster on which the cluster Operator is installed. The AMQ Streams Operator is running. An Apache Kafka cluster is deployed as documented in Deploying and Managing AMQ Streams on OpenShift . You have a Red Hat Integration license. You know how to use OpenShift administration tools. The OpenShift oc CLI client is installed or you have access to the OpenShift Container Platform web console. Depending on how you intend to store the Kafka Connect build image, you must either have permission to access a container registry, or you must create an ImageStream resource on OpenShift: To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub An account and permissions to create and manage images in the registry. To store the build image as a native OpenShift ImageStream An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. Additional resources: Managing image streams on OpenShift Container Platform . | null | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/getting_started_with_debezium/about_this_tutorial |
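For the ImageStream option above, a minimal sketch of creating the resource with the oc client is shown below; the stream name and namespace (debezium-connect-build, debezium-example) are hypothetical and should be adjusted to your own project.

# Hypothetical names; adjust to your own project
oc create imagestream debezium-connect-build -n debezium-example
# Confirm the image stream exists before starting the Kafka Connect build
oc get imagestream debezium-connect-build -n debezium-example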
5.7.4.2. Removing Storage | 5.7.4.2. Removing Storage Removing disk space from a system is straightforward, with most of the steps being similar to the installation sequence (except, of course, in reverse): Move any data to be saved off the disk drive Modify the backup schedule so that the disk drive is no longer backed up Update the system configuration Erase the contents of the disk drive Remove the disk drive As you can see, compared to the installation process, there are a few extra steps to take. These steps are discussed in the following sections. 5.7.4.2.1. Moving Data Off the Disk Drive Should there be any data on the disk drive that must be saved, the first thing to do is to determine where the data should go. This decision depends mainly on what is going to be done with the data. For example, if the data is no longer going to be actively used, it should be archived, probably in the same manner as your system backups. This means that now is the time to consider appropriate retention periods for this final backup. Note Keep in mind that, in addition to any data retention guidelines your organization may have, there may also be legal requirements for retaining data for a certain length of time. Therefore, make sure you consult with the department that had been responsible for the data while it was still in use; they should know the appropriate retention period. On the other hand, if the data is still being used, then the data should reside on the system most appropriate for that usage. Of course, if this is the case, perhaps it would be easiest to move the data by reinstalling the disk drive on the new system. If you do this, you should make a full backup of the data before doing so -- people have dropped disk drives full of valuable data (losing everything) while doing nothing more hazardous than walking across a data center. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-dtd-removing |
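As a sketch of the archiving step described above, assuming the retiring drive is mounted at /mnt/olddisk and the archive area is /backup (both paths and the file name are assumptions for the example):

# Paths and file name are illustrative only
tar -czf /backup/olddisk-final.tar.gz /mnt/olddisk
# Verify the archive is readable before erasing the drive
tar -tzf /backup/olddisk-final.tar.gz > /dev/null && echo "archive OK"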
function::cpu_clock_ms | function::cpu_clock_ms Name function::cpu_clock_ms - Number of milliseconds on the given cpu's clock Synopsis Arguments cpu Which processor's clock to read Description This function returns the number of milliseconds on the given cpu's clock. This value is always monotonic when comparing readings taken on the same cpu, but may have some drift between cpus (within about a jiffy). | [
"cpu_clock_ms:long(cpu:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-cpu-clock-ms |
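A small usage sketch: the one-liner below (run as root, with SystemTap installed) prints the value for CPU 0 once per second. The probe point and interval are arbitrary choices for the example; press Ctrl-C to stop.

stap -e 'probe timer.s(1) { printf("cpu0: %d ms\n", cpu_clock_ms(0)) }'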
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Make sure you are logged in to the Jira website. Provide feedback by clicking on this link . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. If you want to be notified about future updates, please make sure you are assigned as Reporter . Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/feedback_configuring-hana-scale-up-multitarget-system-replication-disaster-recovery |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/proc_providing-feedback-on-red-hat-documentation_composing-installing-managing-rhel-for-edge-images |
Part II. Notable Bug Fixes | Part II. Notable Bug Fixes This part describes bugs fixed in Red Hat Enterprise Linux 7.4 that have a significant impact on users. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/bug-fixes |
Chapter 7. http | Chapter 7. http 7.1. http:proxies 7.1.1. Description List the HTTP proxies 7.1.2. Syntax http:proxies [options] 7.1.3. Options Name Description --help Display this help message 7.2. http:proxy-add 7.2.1. Description Add a new HTTP proxy 7.2.2. Syntax http:proxy-add [options] url proxyTo 7.2.3. Arguments Name Description url HTTP proxy URL proxyTo HTTP location to proxy on the prefix 7.2.4. Options Name Description --help Display this help message 7.3. http:proxy-remove 7.3.1. Description Remove an existing HTTP proxy 7.3.2. Syntax http:proxy-remove [options] prefix 7.3.3. Arguments Name Description prefix The HTTP proxy prefix 7.3.4. Options Name Description --help Display this help message | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/http |
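A short console session illustrating the three commands; the prefix and target location below are hypothetical values, not anything shipped with Karaf by default.

karaf@root()> http:proxy-add /hello http://localhost:8181/cxf/hello
karaf@root()> http:proxies
karaf@root()> http:proxy-remove /hello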
Chapter 28. Developing a Consumer From a WSDL Contract | Chapter 28. Developing a Consumer From a WSDL Contract Abstract One way of creating a consumer is to start from a WSDL contract. The contract defines the operations, messages, and transport details of the service on which a consumer makes requests. The starting point code for the consumer is generated from the WSDL contract. The functionality required by the consumer is added to the generated code. 28.1. Generating the Stub Code Overview The cxf-codegen-plugin Maven plug-in generates the stub code from the WSDL contract. The stub code provides the supporting code that is required to invoke operations on the remote service. For consumers, the cxf-codegen-plugin Maven plug-in generates the following types of code: Stub code - Supporting files for implementing a consumer. Starting point code - Sample code that connects to the remote service and invokes every operation on the remote service. Generating the consumer code To generate consumer code use the cxf-codegen-plugin Maven plug-in. Example 28.1, "Consumer Code Generation" shows how to use the code generator to generate consumer code. Example 28.1. Consumer Code Generation Where outputDir is the location of a directory where the generated files are placed and wsdl specifies the WSDL contract's location. The -client option generates starting point code for the consumer's main() method. For a complete list of the arguments available for the cxf-codegen-plugin Maven plug-in see Section 44.2, "cxf-codegen-plugin" . Generated code The code generation plug-in generates the following Java packages for the contract shown in Example 26.1, "HelloWorld WSDL Contract" : org.apache.hello_world_soap_http - This package is generated from the http://apache.org/hello_world_soap_http target namespace. All of the WSDL entities defined in this namespace (for example, the Greeter port type and the SOAPService service) map to Java classes this Java package. org.apache.hello_world_soap_http.types - This package is generated from the http://apache.org/hello_world_soap_http/types target namespace. All of the XML types defined in this namespace (that is, everything defined in the wsdl:types element of the HelloWorld contract) map to Java classes in this Java package. The stub files generated by the cxf-codegen-plugin Maven plug-in fall into the following categories: Classes representing WSDL entities in the org.apache.hello_world_soap_http package. The following classes are generated to represent WSDL entities: Greeter - A Java interface that represents the Greeter wsdl:portType element. In JAX-WS terminology, this Java interface is the service endpoint interface (SEI). SOAPService - A Java service class (extending javax.xml.ws.Service ) that represents the SOAPService wsdl:service element. PingMeFault - A Java exception class (extending java.lang.Exception ) that represents the pingMeFault wsdl:fault element. Classes representing XML types in the org.objectweb.hello_world_soap_http.types package. In the HelloWorld example, the only generated types are the various wrappers for the request and reply messages. Some of these data types are useful for the asynchronous invocation model. 28.2. Implementing a Consumer Overview To implement a consumer when starting from a WSDL contract, you must use the following stubs: Service class SEI Using these stubs, the consumer code instantiates a service proxy to make requests on the remote service. It also implements the consumer's business logic. 
Generated service class Example 28.2, "Outline of a Generated Service Class" shows the typical outline of a generated service class, ServiceName _Service [2] , which extends the javax.xml.ws.Service base class. Example 28.2. Outline of a Generated Service Class The ServiceName class in Example 28.2, "Outline of a Generated Service Class" defines the following methods: ServiceName (URL wsdlLocation, QName serviceName) - Constructs a service object based on the data in the wsdl:service element with the QName ServiceName service in the WSDL contract that is obtainable from wsdlLocation . ServiceName () - The default constructor. It constructs a service object based on the service name and the WSDL contract that were provided at the time the stub code was generated (for example, when running the wsdl2java tool). Using this constructor presupposes that the WSDL contract remains available at a specified location. ServiceName (Bus bus) - (CXF specific) An additional constructor that enables you to specify the Bus instance used to configure the Service. This can be useful in the context of a multi-threaded application, where multiple Bus instances can be associated with different threads. This constructor provides a simple way of ensuring that the Bus that you specify is the one that is used with this Service. Only available if you specify the -fe cxf option when invoking the wsdl2java tool. get PortName () - Returns a proxy for the endpoint defined by the wsdl:port element with the name attribute equal to PortName . A getter method is generated for every wsdl:port element defined by the ServiceName service. A wsdl:service element that contains multiple endpoint definitions results in a generated service class with multiple get PortName () methods. Service endpoint interface For every interface defined in the original WSDL contract, you can generate a corresponding SEI. A service endpoint interface is the Java mapping of a wsdl:portType element. Each operation defined in the original wsdl:portType element maps to a corresponding method in the SEI. The operation's parameters are mapped as follows: . The input parameters are mapped to method arguments. The first output parameter is mapped to a return value. If there is more than one output parameter, the second and subsequent output parameters map to method arguments (moreover, the values of these arguments must be passed using Holder types). For example, Example 28.3, "The Greeter Service Endpoint Interface" shows the Greeter SEI, which is generated from the wsdl:portType element defined in Example 26.1, "HelloWorld WSDL Contract" . For simplicity, Example 28.3, "The Greeter Service Endpoint Interface" omits the standard JAXB and JAX-WS annotations. Example 28.3. The Greeter Service Endpoint Interface Consumer main function Example 28.4, "Consumer Implementation Code" shows the code that implements the HelloWorld consumer. The consumer connects to the SoapPort port on the SOAPService service and then proceeds to invoke each of the operations supported by the Greeter port type. Example 28.4. Consumer Implementation Code The Client.main() method from Example 28.4, "Consumer Implementation Code" proceeds as follows: Provided that the Apache CXF runtime classes are on your classpath, the runtime is implicitly initialized. There is no need to call a special function to initialize Apache CXF. The consumer expects a single string argument that gives the location of the WSDL contract for HelloWorld. The WSDL contract's location is stored in wsdlURL . 
You create a service object using the constructor that requires the WSDL contract's location and service name. Call the appropriate get PortName () method to obtain an instance of the required port. In this case, the SOAPService service supports only the SoapPort port, which implements the Greeter service endpoint interface. The consumer invokes each of the methods supported by the Greeter service endpoint interface. In the case of the pingMe() method, the example code shows how to catch the PingMeFault fault exception. Client proxy generated with -fe cxf option If you generate your client proxy by specifying the -fe cxf option in wsdl2java (thereby selecting the cxf frontend), the generated client proxy code is better integrated with Java 7. In this case, when you call a get ServiceName Port() method, you get back a type that is a sub-interface of the SEI and implements the following additional interfaces: java.lang.AutoCloseable javax.xml.ws.BindingProvider (JAX-WS 2.0) org.apache.cxf.endpoint.Client To see how this simplifies working with a client proxy, consider the following Java code sample, written using a standard JAX-WS proxy object: And compare the preceding code with the following equivalent code sample, written using code generated by the cxf frontend: [2] If the name attribute of the wsdl:service element ends in Service the _Service is not used. | [
"<plugin> <groupId>org.apache.cxf</groupId> <artifactId>cxf-codegen-plugin</artifactId> <version>USD{cxf.version}</version> <executions> <execution> <id>generate-sources</id> <phase>generate-sources</phase> <configuration> <sourceRoot> outputDir </sourceRoot> <wsdlOptions> <wsdlOption> <wsdl> wsdl </wsdl> <extraargs> <extraarg>-client</extraarg> </extraargs> </wsdlOption> </wsdlOptions> </configuration> <goals> <goal>wsdl2java</goal> </goals> </execution> </executions> </plugin>",
"@WebServiceClient(name=\"...\" targetNamespace=\"...\" wsdlLocation=\"...\") public class ServiceName extends javax.xml.ws.Service { public ServiceName (URL wsdlLocation, QName serviceName) { } public ServiceName () { } // Available only if you specify '-fe cxf' option in wsdl2java public ServiceName (Bus bus) { } @WebEndpoint(name=\"...\") public SEI get PortName () { } . . . }",
"package org.apache.hello_world_soap_http; public interface Greeter { public String sayHi(); public String greetMe(String requestType); public void greetMeOneWay(String requestType); public void pingMe() throws PingMeFault; }",
"package demo.hw.client; import java.io.File; import java.net.URL; import javax.xml.namespace.QName; import org.apache.hello_world_soap_http.Greeter; import org.apache.hello_world_soap_http.PingMeFault; import org.apache.hello_world_soap_http.SOAPService; public final class Client { private static final QName SERVICE_NAME = new QName(\"http://apache.org/hello_world_soap_http\", \"SOAPService\"); private Client() { } public static void main(String args[]) throws Exception { if (args.length == 0) { System.out.println(\"please specify wsdl\"); System.exit(1); } URL wsdlURL; File wsdlFile = new File(args[0]); if (wsdlFile.exists()) { wsdlURL = wsdlFile.toURL(); } else { wsdlURL = new URL(args[0]); } System.out.println(wsdlURL); SOAPService ss = new SOAPService(wsdlURL,SERVICE_NAME); Greeter port = ss.getSoapPort(); String resp; System.out.println(\"Invoking sayHi...\"); resp = port.sayHi(); System.out.println(\"Server responded with: \" + resp); System.out.println(); System.out.println(\"Invoking greetMe...\"); resp = port.greetMe(System.getProperty(\"user.name\")); System.out.println(\"Server responded with: \" + resp); System.out.println(); System.out.println(\"Invoking greetMeOneWay...\"); port.greetMeOneWay(System.getProperty(\"user.name\")); System.out.println(\"No response from server as method is OneWay\"); System.out.println(); try { System.out.println(\"Invoking pingMe, expecting exception...\"); port.pingMe(); } catch (PingMeFault ex) { System.out.println(\"Expected exception: PingMeFault has occurred.\"); System.out.println(ex.toString()); } System.exit(0); } }",
"// Programming with standard JAX-WS proxy object // ( ServiceName PortType port = service.get ServiceName Port(); ((BindingProvider)port).getRequestContext() .put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, address); port. serviceMethod (...); ((Closeable)port).close();",
"// Programming with proxy generated using '-fe cxf' option // try ( ServiceName PortTypeProxy port = service.get ServiceName Port()) { port.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, address); port. serviceMethod (...); }"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwsconsumerdevwsdlfirst |
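Because the plug-in configuration in Example 28.1 binds the wsdl2java goal to the generate-sources phase, the stub and starting point code can be produced with an ordinary Maven invocation. The sketch below assumes the plug-in is declared in the project's pom.xml as shown.

# Runs the cxf-codegen-plugin wsdl2java goal bound to generate-sources
mvn generate-sources
# Or simply build the project; code generation runs as part of the lifecycle
mvn clean package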
Chapter 10. Disconnected environment | Chapter 10. Disconnected environment Disconnected environment is a network restricted environment where the Operator Lifecycle Manager (OLM) cannot access the default Operator Hub and image registries, which require internet connectivity. Red Hat supports deployment of OpenShift Data Foundation in disconnected environments where you have installed OpenShift Container Platform in restricted networks. To install OpenShift Data Foundation in a disconnected environment, see Using Operator Lifecycle Manager on restricted networks of the Operators guide in OpenShift Container Platform documentation. Note When you install OpenShift Data Foundation in a restricted network environment, apply a custom Network Time Protocol (NTP) configuration to the nodes, because by default, internet connectivity is assumed in OpenShift Container Platform and chronyd is configured to use the *.rhel.pool.ntp.org servers. For more information, see the Red Hat Knowledgebase solution A newly deployed OCS 4 cluster status shows as "Degraded", Why? and Configuring chrony time service of the Installing guide in OpenShift Container Platform documentation. Red Hat OpenShift Data Foundation version 4.12 introduced the Agent-based Installer for disconnected environment deployment. The Agent-based Installer allows you to use a mirror registry for disconnected installations. For more information, see Preparing to install with Agent-based Installer . Packages to include for OpenShift Data Foundation When you prune the redhat-operator index image, include the following list of packages for the OpenShift Data Foundation deployment: ocs-operator odf-operator mcg-operator odf-csi-addons-operator odr-cluster-operator odr-hub-operator Optional: local-storage-operator Only for local storage deployments. Optional: odf-multicluster-orchestrator Only for Regional Disaster Recovery (Regional-DR) configuration. Important Name the CatalogSource as redhat-operators . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/planning_your_deployment/disconnected-environment_rhodf |
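A hedged sketch of pruning the index image to just the packages listed above with the opm tool; the index tag and target registry below are assumptions for illustration and must match your OpenShift release and mirror registry.

# local-storage-operator and odf-multicluster-orchestrator are optional; drop them if not needed
opm index prune \
    -f registry.redhat.io/redhat/redhat-operator-index:v4.14 \
    -p ocs-operator,odf-operator,mcg-operator,odf-csi-addons-operator,odr-cluster-operator,odr-hub-operator,local-storage-operator,odf-multicluster-orchestrator \
    -t mirror.example.com:5000/olm/redhat-operator-index:v4.14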
Providing feedback on Red Hat Directory Server | Providing feedback on Red Hat Directory Server We appreciate your input on our documentation and products. Please let us know how we could make it better. To do so: For submitting feedback on the Red Hat Directory Server documentation through Jira (account required): Go to the Red Hat Issue Tracker . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. For submitting feedback on the Red Hat Directory Server product through Jira (account required): Go to the Red Hat Issue Tracker . On the Create Issue page, click . Fill in the Summary field. Select the component in the Component field. Fill in the Description field including: The version number of the selected component. Steps to reproduce the problem or your suggestion for improvement. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/tuning_the_performance_of_red_hat_directory_server/proc_providing-feedback-on-red-hat-documentation_tuning-the-performance-of-rhds |
Part IV. Running Red Hat JBoss Data Grid | Part IV. Running Red Hat JBoss Data Grid | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/part-running_red_hat_jboss_data_grid |
Appendix B. Using Red Hat Maven repositories | Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . There are two ways to configure Maven to use the Red Hat repository: Add the repository to your Maven settings Add the repository to your POM file Adding the repository to your Maven settings This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . Adding the repository to your POM file To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example: Example: A Maven pom.xml file containing the Red Hat repository <project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project> For more information about POM file configuration, see the Maven POM reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url>${repository-url}</url> </repository> ${repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat | [
"/home/ <username> /.m2/settings.xml",
"C:\\Users\\<username>\\.m2\\settings.xml",
"<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>",
"<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>",
"<repository> <id>red-hat-local</id> <url>USD{repository-url}</url> </repository>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_jms/2.4/html/using_qpid_jms/using_red_hat_maven_repositories |
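One way to confirm that Maven is actually using the configuration above is to inspect the merged settings and force a resolution pass; the goals below are standard Maven plugins and assume the profile or repository entries from the examples.

# The red-hat profile and repository URLs should appear in the output
mvn help:effective-settings
# Force dependency resolution so artifacts are fetched from the configured repository
mvn dependency:resolve -U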
Chapter 12. Handling a node failure | Chapter 12. Handling a node failure As a storage administrator, you can experience a whole node failing within the storage cluster, and handling a node failure is similar to handling a disk failure. With a node failure, instead of Ceph recovering placement groups (PGs) for only one disk, all PGs on the disks within that node must be recovered. Ceph will detect that the OSDs are all down and automatically start the recovery process, known as self-healing. There are three node failure scenarios. Replacing the node by using the root and Ceph OSD disks from the failed node. Replacing the node by reinstalling the operating system and using the Ceph OSD disks from the failed node. Replacing the node by reinstalling the operating system and using all new Ceph OSD disks. For a high-level workflow for each node replacement scenario, see Workflow for replacing a node . Prerequisites A running Red Hat Ceph Storage cluster. A failed node. 12.1. Considerations before adding or removing a node One of the outstanding features of Ceph is the ability to add or remove Ceph OSD nodes at run time. This means that you can resize the storage cluster capacity or replace hardware without taking down the storage cluster. The ability to serve Ceph clients while the storage cluster is in a degraded state also has operational benefits. For example, you can add or remove or replace hardware during regular business hours, rather than working overtime or on weekends. However, adding and removing Ceph OSD nodes can have a significant impact on performance. Before you add or remove Ceph OSD nodes, consider the effects on storage cluster performance: Whether you are expanding or reducing the storage cluster capacity, adding or removing Ceph OSD nodes induces backfilling as the storage cluster rebalances. During that rebalancing time period, Ceph uses additional resources, which can impact storage cluster performance. In a production Ceph storage cluster, a Ceph OSD node has a particular hardware configuration that facilitates a particular type of storage strategy. Since a Ceph OSD node is part of a CRUSH hierarchy, the performance impact of adding or removing a node typically affects the performance of pools that use the CRUSH ruleset. Additional Resources See the Red Hat Ceph Storage Storage Strategies Guide for more details. 12.2. Workflow for replacing a node There are three node failure scenarios. Use these high-level workflows for each scenario when replacing a node. Replacing the node by using the root and Ceph OSD disks from the failed node Replacing the node by reinstalling the operating system and using the Ceph OSD disks from the failed node Replacing the node by reinstalling the operating system and using all new Ceph OSD disks Prerequisites A running Red Hat Ceph Storage cluster. A failed node. 12.2.1. Replacing the node by using the root and Ceph OSD disks from the failed node Use the root and Ceph OSD disks from the failed node to replace the node. Procedure Disable backfilling. Syntax Example Replace the node, taking the disks from the old node, and adding them to the new node. Enable backfilling. Syntax Example 12.2.2. Replacing the node by reinstalling the operating system and using the Ceph OSD disks from the failed node Reinstall the operating system and use the Ceph OSD disks from the failed node to replace the node.
Procedure Disable backfilling. Syntax Example Create a backup of the Ceph configuration. Syntax Example Replace the node and add the Ceph OSD disks from the failed node. Configure disks as JBOD. Note This should be done by the storage administrator. Install the operating system. For more information about operating system requirements, see Operating system requirements for Red Hat Ceph Storage . For more information about installing the operating system, see the Red Hat Enterprise Linux product documentation . Note This should be done by the system administrator. Restore the Ceph configuration. Syntax Example Add the new node to the storage cluster using the Ceph Orchestrator commands. Ceph daemons are placed automatically on the respective node. For more information, see Adding a Ceph OSD node . Enable backfilling. Syntax Example 12.2.3. Replacing the node by reinstalling the operating system and using all new Ceph OSD disks Reinstall the operating system and use all new Ceph OSD disks to replace the node. Procedure Disable backfilling. Syntax Example Remove all OSDs on the failed node from the storage cluster. For more information, see Removing a Ceph OSD node . Create a backup of the Ceph configuration. Syntax Example Replace the node and add the Ceph OSD disks from the failed node. Configure disks as JBOD. Note This should be done by the storage administrator. Install the operating system. For more information about operating system requirements, see Operating system requirements for Red Hat Ceph Storage . For more information about installing the operating system, see the Red Hat Enterprise Linux product documentation . Note This should be done by the system administrator. Add the new node to the storage cluster using the Ceph Orchestrator commands. Ceph daemons are placed automatically on the respective node. For more information, see Adding a Ceph OSD node . Enable backfilling. Syntax Example 12.3. Performance considerations The following factors typically affect a storage cluster's performance when adding or removing Ceph OSD nodes: Ceph clients place load on the I/O interface to Ceph; that is, the clients place load on a pool. A pool maps to a CRUSH ruleset. The underlying CRUSH hierarchy allows Ceph to place data across failure domains. If the underlying Ceph OSD node involves a pool that is experiencing high client load, the client load could significantly affect recovery time and reduce performance. Because write operations require data replication for durability, write-intensive client loads in particular can increase the time for the storage cluster to recover. Generally, the capacity you are adding or removing affects the storage cluster's time to recover. In addition, the storage density of the node you add or remove might also affect recovery times. For example, a node with 36 OSDs typically takes longer to recover than a node with 12 OSDs. When removing nodes, you MUST ensure that you have sufficient spare capacity so that you will not reach full ratio or near full ratio . If the storage cluster reaches full ratio , Ceph will suspend write operations to prevent data loss. A Ceph OSD node maps to at least one Ceph CRUSH hierarchy, and the hierarchy maps to at least one pool. Each pool that uses a CRUSH ruleset experiences a performance impact when Ceph OSD nodes are added or removed. Replication pools tend to use more network bandwidth to replicate deep copies of the data, whereas erasure coded pools tend to use more CPU to calculate k+m coding chunks. 
The more copies that exist of the data, the longer it takes for the storage cluster to recover. For example, a larger pool or one that has a greater number of k+m chunks will take longer to recover than a replication pool with fewer copies of the same data. Drives, controllers and network interface cards all have throughput characteristics that might impact the recovery time. Generally, nodes with higher throughput characteristics, such as 10 Gbps and SSDs, recover more quickly than nodes with lower throughput characteristics, such as 1 Gbps and SATA drives. 12.4. Recommendations for adding or removing nodes Red Hat recommends adding or removing one OSD at a time within a node and allowing the storage cluster to recover before proceeding to the OSD. This helps to minimize the impact on storage cluster performance. Note that if a node fails, you might need to change the entire node at once, rather than one OSD at a time. To remove an OSD: Using Removing the OSD daemons using the Ceph Orchestrator . To add an OSD: Using Deploying Ceph OSDs on all available devices . Using Deploying Ceph OSDs using advanced service specification . Using Deploying Ceph OSDs on specific devices and hosts . When adding or removing Ceph OSD nodes, consider that other ongoing processes also affect storage cluster performance. To reduce the impact on client I/O, Red Hat recommends the following: Calculate capacity Before removing a Ceph OSD node, ensure that the storage cluster can backfill the contents of all its OSDs without reaching the full ratio . Reaching the full ratio will cause the storage cluster to refuse write operations. Temporarily disable scrubbing Scrubbing is essential to ensuring the durability of the storage cluster's data; however, it is resource intensive. Before adding or removing a Ceph OSD node, disable scrubbing and deep-scrubbing and let the current scrubbing operations complete before proceeding. Once you have added or removed a Ceph OSD node and the storage cluster has returned to an active+clean state, unset the noscrub and nodeep-scrub settings. Limit backfill and recovery If you have reasonable data durability, there is nothing wrong with operating in a degraded state. For example, you can operate the storage cluster with osd_pool_default_size = 3 and osd_pool_default_min_size = 2 . You can tune the storage cluster for the fastest possible recovery time, but doing so significantly affects Ceph client I/O performance. To maintain the highest Ceph client I/O performance, limit the backfill and recovery operations and allow them to take longer. You can also consider setting the sleep and delay parameters such as, osd_recovery_sleep . Increase the number of placement groups Finally, if you are expanding the size of the storage cluster, you may need to increase the number of placement groups. If you determine that you need to expand the number of placement groups, Red Hat recommends making incremental increases in the number of placement groups. Increasing the number of placement groups by a significant amount will cause a considerable degradation in performance. 12.5. Adding a Ceph OSD node To expand the capacity of the Red Hat Ceph Storage cluster, you can add an OSD node. Prerequisites A running Red Hat Ceph Storage cluster. A provisioned node with a network connection. Procedure Verify that other nodes in the storage cluster can reach the new node by its short host name. 
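For example, a quick reachability check from an existing node might look like the following; host02 is a placeholder host name consistent with the examples later in this chapter.

ping -c 3 host02
ssh root@host02 hostname -s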
Temporarily disable scrubbing: Example Limit the backfill and recovery features: Syntax Example Extract the cluster's public SSH keys to a folder: Syntax Example Copy ceph cluster's public SSH keys to the root user's authorized_keys file on the new host: Syntax Example Add the new node to the CRUSH map: Syntax Example Add an OSD for each disk on the node to the storage cluster. Using Deploying Ceph OSDs on all available devices . Using Deploying Ceph OSDs using advanced service specification . Using Deploying Ceph OSDs on specific devices and hosts . Important When adding an OSD node to a Red Hat Ceph Storage cluster, Red Hat recommends adding one OSD daemon at a time and allowing the cluster to recover to an active+clean state before proceeding to the OSD. Additional Resources See the Setting a Specific Configuration Setting at Runtime section in the Red Hat Ceph Storage Configuration Guide for more details. See Adding a Bucket and Moving a Bucket sections in the Red Hat Ceph Storage Storage Strategies Guide for details on placing the node at an appropriate location in the CRUSH hierarchy. 12.6. Removing a Ceph OSD node To reduce the capacity of a storage cluster, remove an OSD node. Warning Before removing a Ceph OSD node, ensure that the storage cluster can backfill the contents of all OSDs without reaching the full ratio . Reaching the full ratio will cause the storage cluster to refuse write operations. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Procedure Check the storage cluster's capacity: Syntax Temporarily disable scrubbing: Syntax Limit the backfill and recovery features: Syntax Example Remove each OSD on the node from the storage cluster: Using Removing the OSD daemons using the Ceph Orchestrator . Important When removing an OSD node from the storage cluster, Red Hat recommends removing one OSD at a time within the node and allowing the cluster to recover to an active+clean state before proceeding to remove the OSD. After you remove an OSD, check to verify that the storage cluster is not getting to the near-full ratio : Syntax Repeat this step until all OSDs on the node are removed from the storage cluster. Once all OSDs are removed, remove the host: Using Removing hosts using the Ceph Orchestrator . Additional Resources See the Setting a specific configuration at runtime section in the Red Hat Ceph Storage Configuration Guide for more details. 12.7. Simulating a node failure To simulate a hard node failure, power off the node and reinstall the operating system. Prerequisites A healthy running Red Hat Ceph Storage cluster. Root-level access to all nodes on the storage cluster. Procedure Check the storage cluster's capacity to understand the impact of removing the node: Example Optionally, disable recovery and backfilling: Example Shut down the node. If you are changing the host name, remove the node from CRUSH map: Example Check the status of the storage cluster: Example Reinstall the operating system on the node. Add the new node: Using the Adding hosts using the Ceph Orchestrator . Optionally, enable recovery and backfilling: Example Check Ceph's health: Example Additional Resources See the Red Hat Ceph Storage Installation Guide for more details. | [
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"cp /etc/ceph/ceph.conf / PATH_TO_BACKUP_LOCATION /ceph.conf",
"cp /etc/ceph/ceph.conf /some/backup/location/ceph.conf",
"cp / PATH_TO_BACKUP_LOCATION /ceph.conf /etc/ceph/ceph.conf",
"cp /some/backup/location/ceph.conf /etc/ceph/ceph.conf",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"cp /etc/ceph/ceph.conf / PATH_TO_BACKUP_LOCATION /ceph.conf",
"cp /etc/ceph/ceph.conf /some/backup/location/ceph.conf",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd unset noscrub ceph osd unset nodeep-scrub",
"osd_max_backfills = 1 osd_recovery_max_active = 1 osd_recovery_op_priority = 1",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph tell DAEMON_TYPE .* injectargs -- OPTION_NAME VALUE [-- OPTION_NAME VALUE ]",
"ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1",
"ceph cephadm get-pub-key > ~/ PATH",
"ceph cephadm get-pub-key > ~/ceph.pub",
"ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2",
"ssh-copy-id -f -i ~/ceph.pub root@host02",
"ceph orch host add NODE_NAME IP_ADDRESS",
"ceph orch host add host02 10.10.128.70",
"ceph df rados df ceph osd df",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph tell DAEMON_TYPE .* injectargs -- OPTION_NAME VALUE [-- OPTION_NAME VALUE ]",
"ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1",
"ceph -s ceph df",
"ceph df rados df ceph osd df",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd crush rm host03",
"ceph -s",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph -s"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/operations_guide/handling-a-node-failure |
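Section 12.4 above mentions osd_recovery_sleep as one of the sleep and delay parameters that can throttle recovery. A minimal sketch of setting and later clearing it with the ceph config command follows; the value 0.1 is an arbitrary example, not a recommendation.

# Throttle recovery on all OSDs (illustrative value)
ceph config set osd osd_recovery_sleep 0.1
# Remove the override once recovery load is acceptable again
ceph config rm osd osd_recovery_sleep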
Chapter 3. Providing feedback on OpenShift Container Platform documentation | Chapter 3. Providing feedback on OpenShift Container Platform documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click one of the following links: To create a Jira issue for OpenShift Container Platform To create a Jira issue for OpenShift Virtualization Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Click Create to create the issue. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/about/providing-feedback-on-red-hat-documentation |
Chapter 28. Configuration: Configuring IdM Servers and Replicas | Chapter 28. Configuration: Configuring IdM Servers and Replicas The IdM servers and backend services are configured with default settings that are applicable in most environments. There are some configuration areas where the IdM server configuration can be tweaked to improve security or performance in certain situations. This chapter covers information about the IdM configuration, including files and logs used by the IdM server, and procedures for updating the IdM server configuration itself. 28.1. Identity Management Files and Logs Identity Management is a unifying framework that combines disparate Linux services into a single management context. However, the underlying technologies - such as Kerberos, DNS, 389 Directory Server, and Dogtag Certificate System - retain their own configuration files and log files. Identity Management directly manages each of these elements through their own configuration files and tools. This section covers the directories, files, and logs used specifically by IdM. For more information about the configuration files or logs for a specific server used within IdM, see the product documentation. 28.1.1. A Reference of IdM Server Configuration Files and Directories Table 28.1. IdM Server Configuration Files and Directories Directory or File Description Server Configuration /etc/ipa/ The main IdM configuration directory. /etc/ipa/default.conf The primary configuration file for IdM. /etc/ipa/server.conf An optional configuration file for IdM. This does not exist by default, but can be created to load custom configuration when the IdM server is started. /etc/ipa/cli.conf An optional configuration file for IdM command-line tools. This does not exist by default, but can be created to apply custom configuration when the ipa is used. /etc/ipa/ca.crt The CA certificate issued by the IdM server's CA. ~/.ipa/ A user-specific IdM directory that is created on the local system in the system user's home directory the first time the user runs an IdM command. IdM Logs ~/.ipa/log/cli.log The log file for errors returned by XML-RPC calls and responses by the IdM command-line tools. This is created in the home directory for the system user who runs the tools, who may have a different name than the IdM user. /var/log/ipaclient-install.log The installation log for the client service. /var/log/ipaserver-install.log The installation log for the IdM server. /etc/logrotate.d/ The log rotation policies for DNS, SSSD, Apache, Tomcat, and Kerberos. System Services /etc/rc.d/init.d/ipa/ The IdM server init script. Web UI /etc/ipa/html/ A symlink directory in the main configuration directory for the HTML files used by the IdM web UI. /etc/httpd/conf.d/ipa.conf /etc/httpd/conf.d/ipa-rewrite.conf The configuration files used by the Apache host for the web UI application. /etc/httpd/conf/ipa.keytab The keytab file used by the web UI service. /usr/share/ipa/ The main directory for all of the HTML files, scripts, and stylesheets used by the web UI. /usr/share/ipa/ipa-rewrite.conf /usr/share/ipa/ipa.conf The configuration files used by the Apache host for the web UI application. /usr/share/ipa/updates/ Contains any updated files, schema, and other elements for Identity Management. /usr/share/ipa/html/ Contains the HTML files, JavaScript files, and stylesheets used by the web UI. 
/usr/share/ipa/ipaclient/ Contains the JavaScript files used to access Firefox's autoconfiguration feature and set up the Firefox browser to work in the IdM Kerberos realm. /usr/share/ipa/migration/ Contains HTML pages, stylesheets, and Python scripts used for running the IdM server in migration mode. /usr/share/ipa/ui/ Contains all of the scripts used by the UI to perform IdM operations. /var/log/httpd/ The log files for the Apache web server. Kerberos /etc/krb5.conf The Kerberos service configuration file. SSSD /usr/share/sssd/sssd.api.d/sssd-ipa.conf The configuration file used to identify the IdM server, IdM Directory Server, and other IdM services used by SSSD. /var/log/sssd/ The log files for SSSD. 389 Directory Server /var/lib/dirsrv/slapd- REALM_NAME / All of the schema, configuration, and database files associated with the Directory Server instance used by the IdM server. /var/log/dirsrv/slapd- REALM_NAME / Log files associated with the Directory Server instance used by the IdM server. Dogtag Certificate System /etc/pki-ca/ The main directory for the IdM CA instance. /var/lib/pki-ca/conf/CS.cfg The main configuration file for the IdM CA instance. /var/lib/dirsrv/slapd-PKI-IPA/ All of the schema, configuration, and database files associated with the Directory Server instance used by the IdM CA. /var/log/dirsrv/slapd-PKI-IPA/ Log files associated with the Directory Server instance used by the IdM CA. Cache Files /var/cache/ipa/ Cache files for the IdM server and the IdM Kerberos password daemon. System Backups /var/lib/ipa/sysrestore/ Contains backups of all of the system files and scripts that were reconfigured when the IdM server was installed. These include the original .conf files for NSS, Kerberos (both krb5.conf and kdc.conf ), and NTP. /var/lib/ipa-client/sysrestore/ Contains backups of all of the system files and scripts that were reconfigured when the IdM client was installed. Commonly, this is the sssd.conf file for SSSD authentication services. 28.1.2. IdM Domain Services and Log Rotation The 389 Directory Server instances used by IdM as a backend and by the Dogtag Certificate System have their own internal log rotation policies. Log rotation settings such as the size of the file, the period between log rotation, and how long log files are preserved can all be configured by editing the 389 Directory Server configuration. This is covered in the Red Hat Directory Server Administrator's Guide . Several IdM domain services use the system logrotate service to handle log rotation and compression: named (DNS) httpd (Apache) tomcat6 sssd krb5kdc (Kerberos domain controller) Most of these policies use the logrotate defaults for the rotation schedule (weekly) and the archive of logs (four, for four weeks' worth of logs). The individual policies set post-rotation commands to restart the service after log rotation, that a missing log file is acceptable, and compression settings. Example 28.1. Default httpd Log Rotation File There are other potential log settings, like compress settings and the size of the log file, which can be edited in either the global logrotate configuration or in the individual policies. The logrotate settings are covered in the logrotate manpage. Warning Two policies set special create rules. All of the services create a new log file with the same name, default owner, and default permissions as the log. For the named and tomcat6 logs, the create is set with explicit permissions and user/group ownership. 
Do not change the permissions or the user and group which own the log files. This is required for both IdM operations and SELinux settings. Changing the ownership of the log rotation policy or of the files can cause the IdM domain services to fail or to be unable to start. 28.1.3. About default.conf and Context Configuration Files Certain global defaults - like the realm information, the LDAP configuration, and the CA settings - are stored in the default.conf file. This configuration file is referenced when the IdM client and servers start and every time the ipa command is run to supply information as operations are performed. The parameters in the default.conf file are simple attribute=value pairs. The attributes are case-insensitive and order-insensitive. [global] basedn=dc=example,dc=com realm=EXAMPLE.COM domain=example.com xmlrpc_uri=https://server.example.com/ipa/xml ldap_uri=ldapi://%2fvar%2frun%2fslapd-EXAMPLE-COM.socket enable_ra=True ra_plugin=dogtag mode=production When adding more configuration attributes or overriding the global values, users can create additional context configuration files. A server.conf and cli.conf file can be created to apply different options when the IdM server is started or when the ipa command is run, respectively. The IdM server checks the server.conf and cli.conf files first, and then checks the default.conf file. Any configuration files in the /etc/ipa directory apply to all users for the system. Users can set individual overrides by creating default.conf , server.conf , or cli.conf files in their local IdM directory, ~/.ipa/ . This optional file is merged with default.conf and used by the local IdM services. 28.1.4. Checking IdM Server Logs Identity Management unifies several different Linux services, so it relies on those services' native logs for tracking and debugging those services. The other services (Apache, 389 Directory Server, and Dogtag Certificate System) all have detailed logs and log levels. See the specific server documentation for more information on return codes, log formats, and log levels. Table 28.2. IdM Log Files Service Log File Description Additional Information IdM server /var/log/ipaserver-install.log Server installation log IdM server ~/.ipa/log/cli.log Command-line tool log IdM client /var/log/ipaclient-install.log Client installation log Apache server /var/log/httpd/access_log /var/log/httpd/error_log These are standard access and error logs for Apache servers. Both the web UI and the XML-RPC command-line interface use Apache, so some IdM-specific messages will be recorded in the error log along with the Apache messages. Apache log chapter Dogtag Certificate System /var/log/pki-ca-install.log The installation log for the IdM CA. Dogtag Certificate System /var/log/pki-ca/debug /var/log/pki-ca/system /var/log/pki-ca/transactions /var/log/pki-ca/signedAudit These logs mainly relate to certificate operations. In IdM, this is used for service principals, hosts, and other entities which use certificates. Logging chapter 389 Directory Server /var/log/dirsrv/slapd- REALM /access /var/log/dirsrv/slapd- REALM /audit /var/log/dirsrv/slapd- REALM /errors The access and error logs both contain detailed information about attempted access and operations for the domain Directory Server instance. The error log setting can be changed to provide very detailed output. The access log is buffered, so the server only writes to the log every 30 seconds, by default.
Monitoring servers and databases Log entries explained 389 Directory Server /var/log/dirsrv/slapd- REALM /access /var/log/dirsrv/slapd- REALM /audit /var/log/dirsrv/slapd- REALM /errors This directory server instance is used by the IdM CA to store certificate information. Most operational data here will be related to server-replica interactions. The access log is buffered, so the server only writes to the log every 30 seconds, by default. Monitoring servers and databases Log entries explained Kerberos /var/log/krb5libs.log This is the primary log file for Kerberos connections. This location is configured in the krb5.conf file, so it could be different on some systems. Kerberos /var/log/krb5kdc.log This is the primary log file for the Kerberos KDC server. This location is configured in the krb5.conf file, so it could be different on some systems. Kerberos /var/log/kadmind.log This is the primary log file for the Kerberos administration server. This location is configured in the krb5.conf file, so it could be different on some systems. DNS /var/log/messages DNS error messages are included with other system messages. DNS logging is not enabled by default. DNS logging is enabled by running the querylog command: This begins writing log messages to the system's /var/log/messages file. To turn off logging, run the querylog command again. 28.1.4.1. Enabling Server Debug Logging Debug logging for the IdM server is set in the server.conf file. Note Editing the default.conf configuration file affects all IdM components, not only the IdM server. Edit or create the server.conf file. Add the debug line and set its value to true. Restart the Apache daemon to load the changes. 28.1.4.2. Debugging Command-Line Operations Any command-line operation with the ipa command can return debug information by using the -v option. For example: Using the option twice, -vv , displays the XML-RPC exchange: Important The -v and -vv options are global options and must be used before the subcommand when running ipa . | [
"cat /etc/logrotate.d/httpd /var/log/httpd/*log { missingok notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript }",
"cat /etc/logrotate.d/named /var/named/data/named.run { missingok create 0644 named named postrotate /sbin/service named reload 2> /dev/null > /dev/null || true endscript }",
"[global] basedn=dc=example,dc=com realm=EXAMPLE.COM domain=example.com xmlrpc_uri=https://server.example.com/ipa/xml ldap_uri=ldapi://%2fvar%2frun%2fslapd-EXAMPLE-COM.socket enable_ra=True ra_plugin=dogtag mode=production",
"/usr/sbin/rndc querylog",
"vim server.conf",
"[global] debug=True",
"service httpd restart",
"ipa -v user-show admin ipa: INFO: trying https://ipaserver.example.com/ipa/xml First name: John Last name: Smythe User login [jsmythe]: ipa: INFO: Forwarding 'user_add' to server u'https://ipaserver.example.com/ipa/xml' -------------------- Added user \"jsmythe\" -------------------- User login: jsmythe First name: John Last name: Smythe Full name: John Smythe Display name: John Smythe Initials: JS Home directory: /home/jsmythe GECOS field: John Smythe Login shell: /bin/sh Kerberos principal: [email protected] UID: 1966800003 GID: 1966800003 Keytab: False Password: False",
"ipa -vv user-add ipa: INFO: trying https://ipaserver.example.com/ipa/xml First name: Jane Last name: Russell User login [jrussell]: ipa: INFO: Forwarding 'user_add' to server u'https://ipaserver.example.com/ipa/xml' send: u'POST /ipa/xml HTTP/1.0\\r\\nHost: ipaserver.example.com\\r\\nAccept-Language: en-us\\r\\nAuthorization: negotiate YIIFgQYJKoZIhvcSAQICAQBuggVwMIIFbKADAgEFoQMCAQ6iBwMFACAAAACjggGHYYIBgzCCAX+gAwIBBaEZGxdSSFRTLkVORy5CT1MuUkVESEFULkNPTaI5MDegAwIBA6EwMC4bBEhUVFAbJmRlbGwtcGUxODUwLTAxLnJodHMuZW5nLmJvcy5yZWRoYXQuY29to4IBIDCCARygAwIBEqEDAgECooIBDgSCAQpV2YEWv03X+SENdUBfOhMFGc3Fvnd51nELV0rIB1tfGVjpNlkuQxXKSfFKVD3vyAUqkii255T0mnXyTwayE93W1U4sOshetmG50zeU4KDmhuupzQZSCb5xB0KPU4HMDvP1UnDFJUGCk9tcqDJiYE+lJrEcz8H+Vxvvl+nP6yIdUQKqoEuNhJaLWIiT8ieAzk8zvmDlDzpFYtInGWe9D5ko1Bb7Npu0SEpdVJB2gnB5vszCIdLlzHM4JUqX8p21AZV0UYA6QZOWX9OXhqHdElKcuHCN2s9FBRoFYK83gf1voS7xSFlzZaFsEGHNmdA0qXbzREKGqUr8fmWmNvBGpDiR2ILQep09lL56JqSCA8owggPGoAMCARKiggO9BIIDuarbB67zjmBu9Ax2K+0klSD99pNv97h9yxol8c6NGLB4CmE8Mo39rL4MMXHeOS0OCbn+TD97XVGLu+cgkfVcuIG4PMMBoajuSnPmIf7qDvfa8IYDFlDDnRB7I//IXtCc/Z4rBbaxk0SMIRLrsKf5wha7aWtN1JbizBMQw+J0UlN8JjsWxu0Ls75hBtIDbPf3fva3vwBf7kTBChBsheewSAlck9qUglyNxAODgFVvRrXbfkw51Uo++9qHnhh+zFSWepfv7US7RYK0KxOKFd+uauY1ES+xlnMvK18ap2pcy0odBkKu1kwJDND0JXUdSY08MxK2zb/UGWrVEf6GIsBgu122UGiHp6C+0fEu+nRrvtORY65Bgi8E1vm55fbb/9dQWNcQheL9m6QJWPw0rgc+E5SO99ON6x3Vv2+Zk17EmbZXinPd2tDe7fJ9cS9o/z7Qjw8z8vvSzHL4GX7FKi2HJdBST3nEgOC8PqO46UnAJcA8pf1ZkwCK9xDWH+5PSph6WnvpRqugqf/6cq+3jk3MEjCrx+JBJ8QL6AgN3oEB4kvjZpAC+FfTkdX59VLDwfL/r0gMw3ZNk0nLLCLkiiYUMTEHZBzJw9kFbsX3LmS8qQQA6rQ2L782DYisElywfZ/0Sax8JO/G62Zvy7BHy7SQSGIvcdAOafeNyfxaWM1vTpvSh0GrnllYfs3FhZAKnVcLYrlPTapR23uLgRMv+0c9wAbwuUDfKgOOl5vAd1j55VUyapgDEzi/URsLdVdcF4zigt4KrTByCwU2/pI6FmEPqB2tsjM2A8JmqA+9Nt8bjaNdNwCOWE0dF50zeL9P8oodWGkbRZLk4DLIurpCW1d6IyTBhPQ5qZqHJWeoGiFa5y94zBpp27goMPmE0BskXT0JQmveYflOeKEMSzyiWPL2mwi7KEMtfgCpwTIGP2LRE/QxNvPGkwFfO+PDjZGVw+APKkMKqclVXxhtJA/2NmBrO1pZIIJ9R+41sR/QoACcXIUXJnhrTwwR1viKCB5Tec87gN+e0Cf0g+fmZuXNRscwJfhYQJYwJqdYzGtZW+h8jDWqa2EPcDwIQwyFAgXNQ/aMvh1yNTECpLEgrMhYmFAUDLQzI2BDnfbDftIs0rXjSC0oZn/Uaoqdr4F5syOrYAxH47bS6MW8CxyylreH8nT2qQXjenakLFHcNjt4M1nOc/igzNSeZ28qW9WSr4bCdkH+ra3BVpT/AF0WHWkxGF4vWr/iNHCjq8fLF+DsAEx0Zs696Rg0fWZy079A\\r\\nUser-Agent: xmlrpclib.py/1.0.1 (by www.pythonware.com)\\r\\nContent-Type: text/xml\\r\\nContent-Length: 1240\\r\\n\\r\\n' send: \"<?xml version='1.0' encoding='UTF-8'?>\\n<methodCall>\\n<methodName>user_add</methodName>\\n<params>\\n<param>\\n<value><array><data>\\n<value><string>jrussell</string></value>\\n</data></array></value>\\n</param>\\n<param>\\n<value><struct>\\n<member>\\n<name>all</name>\\n<value><boolean>0</boolean></value>\\n</member>\\n<member>\\n<name>displayname</name>\\n<value><string>Jane Russell</string></value>\\n</member>\\n<member>\\n<name>cn</name>\\n<value><string>Jane Russell</string></value>\\n</member>\\n<member>\\n<name>noprivate</name>\\n<value><boolean>0</boolean></value>\\n</member>\\n<member>\\n<name>uidnumber</name>\\n<value><int>999</int></value>\\n</member>\\n<member>\\n<name>raw</name>\\n<value><boolean>0</boolean></value>\\n</member>\\n<member>\\n<name>version</name>\\n<value><string>2.11</string></value>\\n</member>\\n<member>\\n<name>gecos</name>\\n<value><string>Jane Russell</string></value>\\n</member>\\n<member>\\n<name>sn</name>\\n<value><string>Russell</string></value>\\n</member>\\n<member>\\n<name>krbprincipalname</name>\\n<value><string>[email 
protected]</string></value>\\n</member>\\n<member>\\n<name>givenname</name>\\n<value><string>Jane</string></value>\\n</member>\\n<member>\\n<name>initials</name>\\n<value><string>JR</string></value>\\n</member>\\n</struct></value>\\n</param>\\n</params>\\n</methodCall>\\n\" reply: 'HTTP/1.1 200 OK\\r\\n' header: Date: Thu, 15 Sep 2011 00:50:39 GMT header: Server: Apache/2.2.15 (Red Hat) header: WWW-Authenticate: Negotiate YIGZBgkqhkiG9xIBAgICAG+BiTCBhqADAgEFoQMCAQ+iejB4oAMCARKicQRvVl5x6Zt9PbWNzvPEWkdu+3PTCq/ZVKjGHM+1zDBz81GL/f+/Pr75zTuveLYn9de0C3k27vz96fn2HQsy9qVH7sfqn0RWGQWzl+kDkuD6bJ/Dp/mpJvicW5gSkCSH6/UCNuE4I0xqwabLIz8MM/5o header: Connection: close header: Content-Type: text/xml; charset=utf-8 body: \"<?xml version='1.0' encoding='UTF-8'?>\\n<methodResponse>\\n<params>\\n<param>\\n<value><struct>\\n<member>\\n<name>result</name>\\n<value><struct>\\n<member>\\n<name>dn</name>\\n<value><string>uid=jrussell,cn=users,cn=accounts,dc=example,dc=com</string></value>\\n</member>\\n<member>\\n<name>has_keytab</name>\\n<value><boolean>0</boolean></value>\\n</member>\\n<member>\\n<name>displayname</name>\\n<value><array><data>\\n<value><string>Jane Russell</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>uid</name>\\n<value><array><data>\\n<value><string>jrussell</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>objectclass</name>\\n<value><array><data>\\n<value><string>top</string></value>\\n<value><string>person</string></value>\\n<value><string>organizationalperson</string></value>\\n<value><string>inetorgperson</string></value>\\n<value><string>inetuser</string></value>\\n<value><string>posixaccount</string></value>\\n<value><string>krbprincipalaux</string></value>\\n<value><string>krbticketpolicyaux</string></value>\\n<\" body: 'value><string>ipaobject</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>loginshell</name>\\n<value><array><data>\\n<value><string>/bin/sh</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>uidnumber</name>\\n<value><array><data>\\n<value><string>1966800004</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>initials</name>\\n<value><array><data>\\n<value><string>JR</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>gidnumber</name>\\n<value><array><data>\\n<value><string>1966800004</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>gecos</name>\\n<value><array><data>\\n<value><string>Jane Russell</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>sn</name>\\n<value><array><data>\\n<value><string>Russell</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>homedirectory</name>\\n<value><array><data>\\n<value><string>/home/jrussell</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>has_password</name>\\n<value><boolean>0</' body: 'boolean></value>\\n</member>\\n<member>\\n<name>krbprincipalname</name>\\n<value><array><data>\\n<value><string>[email protected]</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>givenname</name>\\n<value><array><data>\\n<value><string>Jane</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>cn</name>\\n<value><array><data>\\n<value><string>Jane 
Russell</string></value>\\n</data></array></value>\\n</member>\\n<member>\\n<name>ipauniqueid</name>\\n<value><array><data>\\n<value><string>bba27e6e-df34-11e0-a5f4-001143d2c060</string></value>\\n</data></array></value>\\n</member>\\n</struct></value>\\n</member>\\n<member>\\n<name>value</name>\\n<value><string>jrussell</string></value>\\n</member>\\n<member>\\n<name>summary</name>\\n<value><string>Added user \"jrussell\"</string></value>\\n</member>\\n</struct></value>\\n</param>\\n</params>\\n</methodResponse>\\n' --------------------- Added user \"jrussell\" --------------------- User login: jrussell First name: Jane Last name: Russell Full name: Jane Russell Display name: Jane Russell Initials: JR Home directory: /home/jrussell GECOS field: Jane Russell Login shell: /bin/sh Kerberos principal: [email protected] UID: 1966800004 GID: 1966800004 Keytab: False Password: False"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/server-config |
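To make the per-user overrides from Section 28.1.3 concrete, the following sketch shows a hypothetical ~/.ipa/cli.conf that turns on debug output for the ipa command for a single user only. It reuses the debug attribute shown above for server.conf; whether your IdM version honors that attribute in cli.conf is an assumption to verify.

vim ~/.ipa/cli.conf

[global]
debug=True

Because context configuration files are merged with default.conf, only the attributes being overridden need to appear in this file.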
Chapter 15. Installing on OpenStack | Chapter 15. Installing on OpenStack 15.1. Preparing to install on OpenStack You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). 15.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 15.1.2. Choosing a method to install OpenShift Container Platform on OpenStack You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 15.1.2.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster on OpenStack with customizations : You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on OpenStack with Kuryr : You can install a customized OpenShift Container Platform cluster on RHOSP that uses Kuryr SDN. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Installing a cluster on OpenStack in a restricted network : You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 15.1.2.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on RHOSP infrastructure that you provision, by using one of the following methods: Installing a cluster on OpenStack on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure. By using this installation method, you can integrate your cluster with existing infrastructure and modifications. For installations on user-provisioned infrastructure, you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. You can use the provided Ansible playbooks to assist with the deployment process. Installing a cluster on OpenStack with Kuryr on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure that uses Kuryr SDN. 
Installing a cluster on OpenStack on your own SR-IOV infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure that uses single-root input/output virtualization (SR-IOV) networks to run compute machines. 15.2. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.9, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 15.2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.9 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You have the metadata service enabled in RHOSP. 15.2.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 15.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 15.2.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.2.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 15.2.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.2.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 15.2.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 15.2.5. Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. 
For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 15.2.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 15.2.7. 
Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 15.2.8. Setting cloud provider options Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options based on the cloud configuration specification. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. 
The cloud provider creates floating IP addresses on the network that is specified here. Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. Additional resources For more information about cloud provider configuration, see OpenStack cloud provider options . 15.2.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.2.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. 
If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Installation configuration parameters section for more information about the available parameters. 15.2.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.2.11. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 15.2.11.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . 
For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.2.11.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.2.11.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. 
String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 15.2.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 15.5. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 15.2.11.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 15.6. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . 
compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. 
{ "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 15.2.11.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 15.2.11.7. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. 
Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: ./openshift-install wait-for install-complete --log-level debug 15.2.11.8. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 15.2.11.8.1. 
RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command: USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command: USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 15.2.11.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network.
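If you choose the optional RBAC hardening that is mentioned in the requirements above, one hedged sketch of the approach is to replace a blanket --share flag with a per-project sharing rule; the project and network names here are placeholders:

$ openstack network rbac create --type network --action access_as_shared --target-project <project> <provider_network>

With a rule like this in place, the provider network behaves as shared only for the target project, rather than for every project in the deployment.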
Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 15.2.11.9. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 15.2.12. Setting compute machine affinity Optionally, you can set the affinity policy for compute machines during installation. The installer does not select an affinity policy for compute machines by default. You can also create machine sets that use particular RHOSP server groups after installation. Note Control plane machines are created with a soft-anti-affinity policy. Tip You can learn more about RHOSP instance scheduling and placement in the RHOSP documentation. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Using the RHOSP command-line interface, create a server group for your compute machines. For example: USD openstack \ --os-compute-api-version=2.15 \ server group create \ --policy anti-affinity \ my-openshift-worker-group For more information, see the server group create command documentation . Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: installation_directory Specifies the name of the directory that contains the install-config.yaml file for your cluster. Open manifests/99_openshift-cluster-api_worker-machineset-0.yaml , the MachineSet definition file. Add the property serverGroupID to the definition beneath the spec.template.spec.providerSpec.value property. 
For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> 1 Add the UUID of your server group here. Optional: Back up the manifests/99_openshift-cluster-api_worker-machineset-0.yaml file. The installation program deletes the manifests/ directory when creating the cluster. When you install the cluster, the installer uses the MachineSet definition that you modified to create compute machines within your RHOSP server group. 15.2.13. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
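For example, a quick way to check for existing public keys, assuming the default location, is:

$ ls ~/.ssh/*.pub

If the command lists a key such as ~/.ssh/id_ed25519.pub, you can reuse that pair instead of generating a new one.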
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.2.14. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 15.2.14.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. 
You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 15.2.14.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 15.2.15. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 15.2.16. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 15.2.17. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 15.2.18. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.2.19. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 15.3. Installing a cluster on OpenStack with Kuryr In OpenShift Container Platform version 4.9, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP) that uses Kuryr SDN. To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 15.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.9 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . 15.3.2. About Kuryr SDN Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. 
This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: The RHOSP version is less than 16. The deployment uses UDP services, or a large number of TCP services on few hypervisors. or The ovn-octavia Octavia driver is disabled. The deployment uses a large number of TCP services on few hypervisors. 15.3.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 15.7. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. 
The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 15.3.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 15.3.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 15.3.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. 
Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase their listeners' default timeouts for the connections. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command: (undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000 Note This is not needed for RHOSP 13.0.13+. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project. To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project. This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation. Note This task is unnecessary in RHOSP version 13.0.13 or later. Octavia implements a new ACL API that restricts access to the load balancers VIP. Get the project ID USD openstack project show <project> Example output +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+ Add the project ID to octavia.conf for the controllers. Source the stackrc file: USD source stackrc # Undercloud credentials List the Overcloud controllers: USD openstack server list Example output +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ SSH into the controller(s). 
USD ssh heat-admin@192.168.24.8 Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user's account. Restart the Octavia worker so the new configuration loads. controller-0USD sudo docker restart octavia_worker Note Depending on your RHOSP environment, Octavia might not support UDP listeners. If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP version 16 and later support UDP. 15.3.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. You can configure your cluster to use the Octavia OVN driver after your RHOSP cloud is upgraded from version 13 to version 16. 15.3.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP.
Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported. Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported. Kuryr SDN does not support automatic unidling by a service. RHOSP environment limitations There are limitations when using Kuryr SDN that depend on your deployment environment. Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution. In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf , which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails. To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1 , i.e. CGO_ENABLED=1 , or ensure that the variable is absent. In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails. Note musl-based containers, including Alpine-based containers, do not support the use-vc option. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. Important If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service default supports UDP load balancing. The recreation causes the DNS service approximately one minute of downtime. 15.3.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.3.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 15.3.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. 
The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.3.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 15.3.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 15.3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . 
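As a hedged illustration only, an administrator could create a minimal external network and subnet similar to the following; the flat network type, the datacentre physical network name, and the address ranges are assumptions that depend entirely on your RHOSP deployment:

$ openstack network create --external --provider-network-type flat --provider-physical-network datacentre public_network
$ openstack subnet create --network public_network --subnet-range 192.0.2.0/24 --no-dhcp --allocation-pool start=192.0.2.10,end=192.0.2.200 --gateway 192.0.2.1 public_subnet

Refer to the linked RHOSP procedures for the exact steps that match your environment.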
Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 15.3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 15.3.8. Setting cloud provider options Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. 
For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options based on the cloud configuration specification. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. Additional resources For more information about cloud provider configuration, see OpenStack cloud provider options . 15.3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). 
Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 15.3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note Kuryr installations default to HTTP proxies. Prerequisites For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter: USD ip route add <cluster_network_cidr> via <installer_subnet_gateway> The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates. You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.3.11. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 15.3.11.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.8. 
Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.3.11.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.9. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. 
An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.3.11.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.10. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision.
The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 15.3.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 15.11. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. 
To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 15.3.11.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 15.12. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . 
The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 15.3.11.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. 
To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 15.3.11.7. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 3 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 15.3.11.8. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 15.3.11.8.1. 
RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command: USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command: USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 15.3.11.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet (a lookup example follows this procedure). Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network.
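If you do not already have the UUID and CIDR of the provider network subnet on hand, you can look them up with the RHOSP CLI before you edit the install-config.yaml file. The following command is a minimal sketch; the network name provider-data-center is a hypothetical placeholder for your own provider network name:
$ openstack subnet list --network provider-data-center -c ID -c Name -c Subnet
In the output, the ID column provides the value for the platform.openstack.machinesSubnet property, and the Subnet column provides the CIDR block for the networking.machineNetwork.cidr property.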
Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 15.3.11.9. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add ports to the pool when it is created, such as when a new host is added, or a new namespace is created. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 15.3.11.10. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. 
The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set the value of enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports after a namespace is created or a new node is added to the cluster. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 15.3.12. Setting compute machine affinity Optionally, you can set the affinity policy for compute machines during installation. The installer does not select an affinity policy for compute machines by default. You can also create machine sets that use particular RHOSP server groups after installation. Note Control plane machines are created with a soft-anti-affinity policy. Tip You can learn more about RHOSP instance scheduling and placement in the RHOSP documentation. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Using the RHOSP command-line interface, create a server group for your compute machines. For example: USD openstack \ --os-compute-api-version=2.15 \ server group create \ --policy anti-affinity \ my-openshift-worker-group For more information, see the server group create command documentation . 
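Optional: If you want to record the UUID of the new server group for the serverGroupID property that you add later in this procedure, you can query it with the RHOSP CLI. This is a sketch that assumes the group name from the previous example:
$ openstack server group show my-openshift-worker-group -f value -c id
The command prints only the UUID of the server group, which you can paste into the MachineSet definition in a later step.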
Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: installation_directory Specifies the name of the directory that contains the install-config.yaml file for your cluster. Open manifests/99_openshift-cluster-api_worker-machineset-0.yaml , the MachineSet definition file. Add the property serverGroupID to the definition beneath the spec.template.spec.providerSpec.value property. For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> 1 Add the UUID of your server group here. Optional: Back up the manifests/99_openshift-cluster-api_worker-machineset-0.yaml file. The installation program deletes the manifests/ directory when creating the cluster. When you install the cluster, the installer uses the MachineSet definition that you modified to create compute machines within your RHOSP server group. 15.3.13. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.3.14. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 15.3.14.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>.
IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 15.3.14.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 15.3.15. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 15.3.16. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 15.3.17. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 15.3.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.3.19. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 15.4. Installing a cluster on OpenStack that supports SR-IOV-connected compute machines In OpenShift Container Platform version 4.9, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that can use compute machines with single-root I/O virtualization (SR-IOV) technology. 15.4.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Verify that OpenShift Container Platform 4.9 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . Verify that your network configuration does not rely on a provider network. Provider networks are not supported. Have a storage service installed in RHOSP, like block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . Have metadata service enabled in RHOSP 15.4.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 15.13. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 A cluster might function with fewer than recommended resources, but its performance is not guaranteed. 
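Before you begin the installation, you might want to compare these recommendations against your current project quota. A minimal check, assuming the standard RHOSP CLI and a hypothetical project named openshift:
$ openstack quota show openshift
Review the compute and network limits in the output, in particular the values for instances, cores, RAM, ports, floating IP addresses, and security groups and their rules.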
Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 15.4.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.4.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. Additionally, for clusters that use single-root input/output virtualization (SR-IOV), RHOSP compute nodes require a flavor that supports huge pages . Important SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. Additional resources For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 15.4.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.4.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 15.4.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. 
Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 15.4.5. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 15.4.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. 
The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 15.4.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.4.8. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases.
Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 15.4.8.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
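Tip After the installation completes, you can review the cluster-wide proxy configuration that results from these settings. The following check is a minimal example that assumes the oc CLI is installed and logged in to the cluster; it displays the Proxy object that is named cluster, including the populated status.noProxy field:
USD oc get proxy/cluster -o yaml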
Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.4.9. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 15.4.9.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.14. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.4.9.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.15. Network parameters Parameter Description Values networking The configuration for the cluster network. 
Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.4.9.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.16. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. 
This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. 
sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 15.4.9.4. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 15.4.9.5. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. 
Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: ./openshift-install wait-for install-complete --log-level debug 15.4.9.6. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 15.4.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
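Tip If you want to confirm the type and fingerprint of the key before you use it, you can inspect the public key file. This example assumes the same <path>/<file_name> placeholder as the command above:
USD ssh-keygen -l -f <path>/<file_name>.pub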
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.4.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 15.4.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. 
You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 15.4.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 15.4.12. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks. Procedure On a command line, create a radio RHOSP network: USD openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: USD openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: USD openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: USD openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 15.4.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
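Tip Before you initialize the deployment, you can confirm that the installation program is present in your working directory and verify which OpenShift Container Platform release it installs. A quick check, assuming the extracted openshift-install binary from the earlier steps:
USD ./openshift-install version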
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 15.4.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 15.4.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin The cluster is operational. Before you can add SR-IOV compute machines though, you must perform additional tasks. 15.4.16. Preparing a cluster that runs on RHOSP for SR-IOV Before you use single root I/O virtualization (SR-IOV) on a cluster that runs on Red Hat OpenStack Platform (RHOSP), make the RHOSP metadata service mountable as a drive and enable the No-IOMMU Operator for the virtual function I/O (VFIO) driver. 15.4.16.1. Enabling the RHOSP metadata service as a mountable drive You can apply a machine config to your machine pool that makes the Red Hat OpenStack Platform (RHOSP) metadata service available as a mountable drive. The following machine config enables the display of RHOSP network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of SR-IOV resources to cluster SR-IOV resources. Procedure Create a machine config file from the following template: A mountable metadata service machine config file kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target 1 You can substitute a name of your choice. From a command line, apply the machine config: USD oc apply -f <machine_config_file_name>.yaml 15.4.16.2. Enabling the No-IOMMU feature for the RHOSP VFIO driver You can apply a machine config to your machine pool that enables the No-IOMMU feature for the Red Hat OpenStack Platform (RHOSP) virtual function I/O (VFIO) driver. The RHOSP vfio-pci driver requires this feature. Procedure Create a machine config file from the following template: A No-IOMMU VFIO machine config file kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK 1 You can substitute a name of your choice. From a command line, apply the machine config: USD oc apply -f <machine_config_file_name>.yaml The cluster is installed and prepared for SR-IOV configuration. Complete the post-installation SR-IOV tasks that are listed in the " steps" section. 15.4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. 
If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.4.18. steps To complete SR-IOV configuration for your cluster: Install the Performance Addon Operator . Configure the Performance Addon Operator with huge pages support . Install the SR-IOV Operator . Configure your SR-IOV network device . Add an SR-IOV compute machine set . Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 15.5. Installing a cluster on OpenStack on your own infrastructure In OpenShift Container Platform version 4.9, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 15.5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.9 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 15.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 
15.5.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 15.17. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 15.5.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.5.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 15.5.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.5.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. 
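Tip You can confirm that a suitable Python interpreter is available before you add the repositories. For example:
USD python3 --version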
Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 15.5.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 15.5.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. 
Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.5.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
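For example, to generate an ECDSA key instead, you might run a command such as the following; it assumes the same <path>/<file_name> placeholders as the previous command:
USD ssh-keygen -t ecdsa -N '' -f <path>/<file_name>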
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.5.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.9 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 15.5.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). 
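Tip Before you continue, you can optionally confirm that the rhcos image that you created in the previous section is ready for use. This check assumes the image name rhcos from the earlier example:
USD openstack image show rhcos -c status -f value
A status of active indicates that the image is available to the installation process.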
Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 15.5.10. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 15.5.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. 
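Tip If you added DNS records for the API and Ingress FIPs, you can verify that they resolve before you continue. A minimal check, assuming that the dig utility is installed on your machine:
USD dig +short api.<cluster_name>.<base_domain>
The command returns the API floating IP address if the record is in place.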
Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 15.5.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 15.5.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... 
cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 15.5.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 15.5.13. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. 
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 15.5.13.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.18. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.5.13.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.19. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. 
For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.5.13.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.20. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 15.5.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 15.21. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . 
compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 15.5.13.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 15.22. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. 
Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 15.5.13.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. 
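If you are not sure which UUID to use, or whether the subnet satisfies the requirements listed next, you can inspect it with the RHOSP CLI. This is a minimal sketch; my-custom-subnet is a placeholder for the name or ID of your own subnet, and the command assumes a standard python-openstackclient:
$ openstack subnet show my-custom-subnet -c id -c cidr -c enable_dhcp -c network_id
The id value is what you set as platform.openstack.machinesSubnet, the cidr value must match networking.machineNetwork, and enable_dhcp must be True.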
Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 15.5.13.7. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 15.5.13.8. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, e.g. 192.0.2.0/24 . To set the value manually, open the file and set the value of networking.machineCIDR to something that matches your intended Neutron subnet. 15.5.13.9. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. 
Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 15.5.13.10. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 15.5.13.10.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. 
For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 15.5.13.10.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 15.5.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 15.5.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. 
The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 15.5.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". 
Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 15.5.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform post-installation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. 
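Before you continue, you can optionally confirm that the floating IP addresses you entered in inventory.yaml exist in your project and are not already attached to a port. This quick check is not part of the documented procedure and assumes a standard python-openstackclient:
$ openstack floating ip list -c "Floating IP Address" -c "Fixed IP Address" -c Port
Each address that you listed in inventory.yaml should appear in the output with empty Fixed IP Address and Port columns.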
On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 15.5.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: ./openshift-install wait-for install-complete --log-level debug 15.5.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". 
The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 15.5.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 15.5.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 15.5.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. 
Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 15.5.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 15.5.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. 
The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 15.5.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 15.5.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.5.26. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . 
If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 15.6. Installing a cluster on OpenStack with Kuryr on your own infrastructure In OpenShift Container Platform version 4.9, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 15.6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.9 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 15.6.2. About Kuryr SDN Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. 
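If you choose Kuryr, a quick way to confirm after installation that the kuryr-controller and kuryr-cni components described above are running is to list the pods in the openshift-kuryr namespace. This is only a sanity check, assuming the oc CLI is already configured for your cluster:
$ oc -n openshift-kuryr get pods -o wide
Expect one kuryr-controller pod and one kuryr-cni pod for each node.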
Kuryr is not recommended in deployments where all of the following criteria are true: The RHOSP version is less than 16. The deployment uses UDP services, or a large number of TCP services on few hypervisors. or The ovn-octavia Octavia driver is disabled. The deployment uses a large number of TCP services on few hypervisors. 15.6.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 15.23. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 15.6.3.1. 
Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 15.6.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 15.6.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase their listeners' default timeouts for the connections. The default timeout is 50 seconds. 
Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command: (undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000 Note This is not needed for RHOSP 13.0.13+. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project. To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project. This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation. Note This task is unnecessary in RHOSP version 13.0.13 or later. Octavia implements a new ACL API that restricts access to the load balancer's VIP. Get the project ID: USD openstack project show <project> Example output +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+ Add the project ID to octavia.conf for the controllers. Source the stackrc file: USD source stackrc # Undercloud credentials List the Overcloud controllers: USD openstack server list Example output +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ SSH into the controller(s): USD ssh heat-admin@192.168.24.8 Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user's account. Restart the Octavia worker so the new configuration loads. controller-0USD sudo docker restart octavia_worker Note Depending on your RHOSP environment, Octavia might not support UDP listeners. If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP version 16 or later supports UDP. 15.6.3.3.1. 
The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. 15.6.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported. Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported. Kuryr SDN does not support automatic unidling by a service. RHOSP environment limitations There are limitations when using Kuryr SDN that depend on your deployment environment. 
Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution. In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf , which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails. To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1 , i.e. CGO_ENABLED=1 , or ensure that the variable is absent. In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails. Note musl-based containers, including Alpine-based containers, do not support the use-vc option. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. Important If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service default supports UDP load balancing. The recreation causes the DNS service approximately one minute of downtime. 15.6.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.6.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 15.6.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.6.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 15.6.5. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 15.6.6. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
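Before you download the playbooks, you can optionally confirm that the dependencies from the previous section resolve correctly. This is a minimal check, assuming the RHEL 8 setup described above:
USD ansible --version
USD openstack --version
USD python3 -c 'import openstack, netaddr'   # no output means the Python modules import cleanly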
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 15.6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.6.9. 
Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.9 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 15.6.10. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 15.6.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. 
You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 15.6.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 15.6.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. 
To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 15.6.12. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 15.6.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. 
Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 15.6.14. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 15.6.14.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.24. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. 
Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.6.14.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.25. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . 
Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.6.14.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.26. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. 
Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 15.6.14.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 15.27. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 15.6.14.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 15.28. 
Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. 
To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 15.6.14.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 15.6.14.7. 
Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 3 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 15.6.14.8. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 15.6.14.8.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. 
Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command: USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command: USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 15.6.14.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 15.6.14.9. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.
The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add ports to a pool when the pool is created, such as when a new host is added or a new namespace is created. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 15.6.14.10. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want. Because the cluster does not exist yet, edit the manifest file directly rather than using the oc client. Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set the value of enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports after a namespace is created or a new node is added to the cluster. This setting increases the number of Neutron ports that the cluster consumes from its quota, but it can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 .
3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 15.6.14.11. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24 . To set the value manually, open the file and set the cidr value under networking.machineNetwork to a value that matches your intended Neutron subnet. 15.6.14.12. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml .
From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 15.6.14.13. Modifying the network type By default, the installation program selects the OpenShiftSDN network type. To use Kuryr instead, change the value in the installation configuration file that the program generated. Prerequisites You have the file install-config.yaml that was generated by the OpenShift Container Platform installation program Procedure In a command prompt, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["networkType"] = "Kuryr"; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set networking.networkType to "Kuryr" . 15.6.15. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. 
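Before you remove any files in the next step, you can optionally confirm which machine manifests the installation program generated. This check is not part of the documented procedure; the glob pattern shown here is only an illustration that matches the files deleted in the following step:

$ ls <installation_directory>/openshift/99_openshift-cluster-api_*.yaml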
Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 15.6.16. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. 
The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 15.6.17. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". 
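If you are not sure whether the variable is set in your current shell, a quick check such as the following is sufficient; empty output means that you still need to export it:

$ echo "$INFRA_ID"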
Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 15.6.18. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform post-installation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" 15.6.19. 
Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 15.6.20. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 15.6.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 15.6.22. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. 
If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 15.6.23. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 15.6.24. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 15.6.25. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 15.6.26. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.6.27. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 15.7. Installing a cluster on OpenStack on your own SR-IOV infrastructure In OpenShift Container Platform 4.9, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure and uses single-root input/output virtualization (SR-IOV) networks to run compute machines. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, such as Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 15.7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.9 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . Your network configuration does not rely on a provider network. Provider networks are not supported. You have an RHOSP account where you want to install OpenShift Container Platform. On the machine where you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 15.7.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 15.7.3. 
Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 15.29. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 15.7.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.7.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. Additionally, for clusters that use single-root input/output virtualization (SR-IOV), RHOSP compute nodes require a flavor that supports huge pages . Important SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. Additional resources For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 15.7.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.7.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. 
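You can verify the Python prerequisite before you continue. This is a generic check rather than a required step of the procedure:

$ python3 --version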
Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 15.7.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 15.7.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. 
Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.7.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
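For example, if you need an RSA key for a FIPS-enabled installation, a command like the following generates one; the 4096-bit key size is only a suggestion, and the path and file name are placeholders:

$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name>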
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.7.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.9 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 15.7.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). 
Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries. Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 15.7.10. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 15.7.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.
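If you want to double-check the addresses before you record them in DNS or in the inventory file, listing the floating IP addresses in your project is one option. This verification is not part of the documented procedure:

$ openstack floating ip list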
Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 15.7.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 15.7.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... 
cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 15.7.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 15.7.13. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. 
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 15.7.13.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.30. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.7.13.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.31. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. 
For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.7.13.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.32. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 15.7.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 15.33. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . 
compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 15.7.13.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 15.34. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. 
Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 15.7.13.6. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. 
apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 15.7.13.7. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 15.7.13.8. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, e.g. 192.0.2.0/24 . To set the value manually, open the file and set the value of networking.machineCIDR to something that matches your intended Neutron subnet. 15.7.13.9. 
Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 15.7.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . 
This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 15.7.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. 
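For example, a command similar to the following prints only the image status, which must be active before you continue:
USD openstack image show <image_name> -c status -f value
Example output
active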
Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 15.7.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 15.7.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... 
# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform post-installation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 15.7.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. 
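Note
Before you edit the file, you can review the flavors that your RHOSP deployment defines by running the following command. Whether a particular flavor schedules to bare metal hardware depends on how your RHOSP administrator configured it:
USD openstack flavor list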
Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: ./openshift-install wait-for install-complete --log-level debug 15.7.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 15.7.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 15.7.20. 
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 15.7.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 15.7.22. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks. Procedure On a command line, create a radio RHOSP network: USD openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: USD openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: USD openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: USD openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 15.7.23. Creating compute machines that run on SR-IOV networks After standing up the control plane, create compute machines that run on the SR-IOV networks that you created in "Creating SR-IOV networks for compute machines". Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The metadata.yaml file that the installation program created is in the same directory as the Ansible playbooks. 
The control plane is active. You created radio and uplink SR-IOV networks as described in "Creating SR-IOV networks for compute machines". Procedure On a command line, change the working directory to the location of the inventory.yaml and common.yaml files. Add the radio and uplink networks to the end of the inventory.yaml file by using the additionalNetworks parameter: .... # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' additionalNetworks: - id: radio count: 4 1 type: direct port_security_enabled: no - id: uplink count: 4 2 type: direct port_security_enabled: no 1 2 The count parameter defines the number of SR-IOV virtual functions (VFs) to attach to each worker node. In this case, each network has four VFs. Replace the content of the compute-nodes.yaml file with the following text: Example 15.1. compute-nodes.yaml - import_playbook: common.yaml - hosts: all gather_facts: no vars: worker_list: [] port_name_list: [] nic_list: [] tasks: # Create the SDN/primary port for each worker node - name: 'Create the Compute ports' os_port: name: "{{ item.1 }}-{{ item.0 }}" network: "{{ os_network }}" security_groups: - "{{ os_sg_worker }}" allowed_address_pairs: - ip_address: "{{ os_ingressVIP }}" with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}" register: ports # Tag each SDN/primary port with cluster name - name: 'Set Compute ports tag' command: cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.1 }}-{{ item.0 }}" with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}" - name: 'List the Compute Trunks' command: cmd: "openstack network trunk list" when: os_networking_type == "Kuryr" register: compute_trunks - name: 'Create the Compute trunks' command: cmd: "openstack network trunk create --parent-port {{ item.1.id }} {{ os_compute_trunk_name }}-{{ item.0 }}" with_indexed_items: "{{ ports.results }}" when: - os_networking_type == "Kuryr" - "os_compute_trunk_name|string not in compute_trunks.stdout" - name: 'Call additional-port processing' include_tasks: additional-ports.yaml # Create additional ports in OpenStack - name: 'Create additionalNetworks ports' os_port: name: "{{ item.0 }}-{{ item.1.name }}" vnic_type: "{{ item.1.type }}" network: "{{ item.1.uuid }}" port_security_enabled: "{{ item.1.port_security_enabled|default(omit) }}" no_security_groups: "{{ 'true' if item.1.security_groups is not defined else omit }}" security_groups: "{{ item.1.security_groups | default(omit) }}" with_nested: - "{{ worker_list }}" - "{{ port_name_list }}" # Tag the ports with the cluster info - name: 'Set additionalNetworks ports tag' command: cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.0 }}-{{ item.1.name }}" with_nested: - "{{ worker_list }}" - "{{ port_name_list }}" # Build the nic list to use for server create - name: Build nic list set_fact: nic_list: "{{ nic_list | default([]) + [ item.name ] }}" with_items: "{{ port_name_list }}" # Create the servers - name: 'Create the Compute servers' vars: worker_nics: "{{ [ item.1 ] | product(nic_list) | map('join','-') | map('regex_replace', '(.*)', 'port-name=\\1') | list }}" os_server: name: "{{ item.1 }}" image: "{{ os_image_rhcos }}" flavor: "{{ os_flavor_worker }}" auto_ip: no userdata: "{{ lookup('file', 'worker.ign') | string }}" security_groups: [] nics: "{{ [ 'port-name=' + os_port_worker + '-' + item.0|string ] + worker_nics }}" config_drive: yes 
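# Loop over every entry in worker_list; each server attaches its primary SDN port plus the SR-IOV ports created above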
with_indexed_items: "{{ worker_list }}" Insert the following content into a local file that is called additional-ports.yaml : Example 15.2. additional-ports.yaml # Build a list of worker nodes with indexes - name: 'Build worker list' set_fact: worker_list: "{{ worker_list | default([]) + [ item.1 + '-' + item.0 | string ] }}" with_indexed_items: "{{ [ os_compute_server_name ] * os_compute_nodes_number }}" # Ensure that each network specified in additionalNetworks exists - name: 'Verify additionalNetworks' os_networks_info: name: "{{ item.id }}" with_items: "{{ additionalNetworks }}" register: network_info # Expand additionalNetworks by the count parameter in each network definition - name: 'Build port and port index list for additionalNetworks' set_fact: port_list: "{{ port_list | default([]) + [ { 'net_name' : item.1.id, 'uuid' : network_info.results[item.0].openstack_networks[0].id, 'type' : item.1.type|default('normal'), 'security_groups' : item.1.security_groups|default(omit), 'port_security_enabled' : item.1.port_security_enabled|default(omit) } ] * item.1.count|default(1) }}" index_list: "{{ index_list | default([]) + range(item.1.count|default(1)) | list }}" with_indexed_items: "{{ additionalNetworks }}" # Calculate and save the name of the port # The format of the name is cluster_name-worker-workerID-networkUUID(partial)-count # i.e. fdp-nz995-worker-1-99bcd111-1 - name: 'Calculate port name' set_fact: port_name_list: "{{ port_name_list | default([]) + [ item.1 | combine( {'name' : item.1.uuid | regex_search('([^-]+)') + '-' + index_list[item.0]|string } ) ] }}" with_indexed_items: "{{ port_list }}" when: port_list is defined On a command line, run the compute-nodes.yaml playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml 15.7.24. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. 
After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 15.7.25. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. The cluster is operational. Before you can configure it for SR-IOV networks though, you must perform additional tasks. 15.7.26. 
Preparing a cluster that runs on RHOSP for SR-IOV Before you use single root I/O virtualization (SR-IOV) on a cluster that runs on Red Hat OpenStack Platform (RHOSP), make the RHOSP metadata service mountable as a drive and enable the No-IOMMU Operator for the virtual function I/O (VFIO) driver. 15.7.26.1. Enabling the RHOSP metadata service as a mountable drive You can apply a machine config to your machine pool that makes the Red Hat OpenStack Platform (RHOSP) metadata service available as a mountable drive. The following machine config enables the display of RHOSP network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of SR-IOV resources to cluster SR-IOV resources. Procedure Create a machine config file from the following template: A mountable metadata service machine config file kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target 1 You can substitute a name of your choice. From a command line, apply the machine config: USD oc apply -f <machine_config_file_name>.yaml 15.7.26.2. Enabling the No-IOMMU feature for the RHOSP VFIO driver You can apply a machine config to your machine pool that enables the No-IOMMU feature for the Red Hat OpenStack Platform (RHOSP) virtual function I/O (VFIO) driver. The RHOSP vfio-pci driver requires this feature. Procedure Create a machine config file from the following template: A No-IOMMU VFIO machine config file kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK 1 You can substitute a name of your choice. From a command line, apply the machine config: USD oc apply -f <machine_config_file_name>.yaml Note After you apply the machine config to the machine pool, you can watch the machine config pool status to see when the machines are available. The cluster is installed and prepared for SR-IOV configuration. You must now perform the SR-IOV configuration tasks in "Next steps". 15.7.27. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.7.28.
Additional resources See Performance Addon Operator for low latency nodes for information about configuring your deployment for real-time running and low latency. 15.7.29. Next steps To complete SR-IOV configuration for your cluster: Install the Performance Addon Operator . Configure the Performance Addon Operator with huge pages support . Install the SR-IOV Operator . Configure your SR-IOV network device . Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 15.8. Installing a cluster on OpenStack in a restricted network In OpenShift Container Platform 4.9, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. 15.8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.9 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have the metadata service enabled in RHOSP. 15.8.2. About installations in restricted networks In OpenShift Container Platform 4.9, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 15.8.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 15.8.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 15.35.
Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 15.8.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.8.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 15.8.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 15.8.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 15.8.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. 
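Note If you are not certain whether Swift is deployed in your environment, you can check the RHOSP service catalog for an object-store entry before you continue. The following command is only an illustrative check; it assumes that the openstack command-line client is installed and that your RHOSP credentials are already loaded in your shell:
openstack catalog list | grep -i object-store
If the command prints an object-store service, Swift is available in your environment. If it prints nothing, Swift is not available and you can skip this section.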
Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 15.8.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 15.8.7. Setting cloud provider options Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation. 
Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options based on the cloud configuration specification. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. Additional resources For more information about cloud provider configuration, see OpenStack cloud provider options . 15.8.8. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.9 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. Decompress the image. Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. For example: USD openstack image create --disk-format qcow2 --file <decompressed_image_file_name> rhcos-<version> Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random.
To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 15.8.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example: platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. 
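For reference, the following is a minimal sketch of producing that base64-encoded value on a Linux host with GNU coreutils; the user name and password shown are placeholders, not values defined elsewhere in this procedure:
echo -n '<registry_username>:<registry_password>' | base64 -w0
The -n option keeps a trailing newline out of the encoded string, and -w0 disables line wrapping so that the output can be pasted into the pull secret as a single token.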
Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which look like this excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release To complete these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 15.8.9.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note Kuryr installations default to HTTP proxies. Prerequisites For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter: USD ip route add <cluster_network_cidr> via <installer_subnet_gateway> The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates. You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. 
Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.8.9.2. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 15.8.9.2.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.36. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. 
Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.8.9.2.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.37. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.8.9.2.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.38. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. 
compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: sshKey: <key1> <key2> <key3> 15.8.9.2.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 15.39. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 15.8.9.2.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 15.40. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.
On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . 
platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 15.8.9.3. Sample customized install-config.yaml file for restricted OpenStack installations This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 15.8.10. Setting compute machine affinity Optionally, you can set the affinity policy for compute machines during installation. The installer does not select an affinity policy for compute machines by default. You can also create machine sets that use particular RHOSP server groups after installation. Note Control plane machines are created with a soft-anti-affinity policy. Tip You can learn more about RHOSP instance scheduling and placement in the RHOSP documentation. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Using the RHOSP command-line interface, create a server group for your compute machines. For example: USD openstack \ --os-compute-api-version=2.15 \ server group create \ --policy anti-affinity \ my-openshift-worker-group For more information, see the server group create command documentation . Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: installation_directory Specifies the name of the directory that contains the install-config.yaml file for your cluster. Open manifests/99_openshift-cluster-api_worker-machineset-0.yaml , the MachineSet definition file. Add the property serverGroupID to the definition beneath the spec.template.spec.providerSpec.value property. 
For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> 1 Add the UUID of your server group here. Optional: Back up the manifests/99_openshift-cluster-api_worker-machineset-0.yaml file. The installation program deletes the manifests/ directory when creating the cluster. When you install the cluster, the installer uses the MachineSet definition that you modified to create compute machines within your RHOSP server group. 15.8.11. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. 
If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.8.12. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 15.8.12.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>.
IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 15.8.12.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 15.8.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 15.8.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 15.8.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 15.8.16. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 15.8.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.8.18. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 15.9. OpenStack cloud configuration reference guide A cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). Use the following parameters in a cloud-provider configuration manifest file to configure your cluster. 15.9.1. OpenStack cloud provider options The cloud provider configuration, typically stored as a file named cloud.conf , controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). You can create a valid cloud.conf file by specifying the following options in it. 15.9.1.1. Global options The following options are used for RHOSP CCM authentication with the RHOSP Identity service, also known as Keystone.
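A minimal sketch of what a cloud.conf using these keys might look like, assuming password-based authentication and purely illustrative values; every key shown here is described in the option tables that follow:

[Global]
auth-url=https://<identity_service_host>/identity
username=<user_name>
password=<password>
tenant-name=<project_name>
domain-name=Default
region=<region_name>

[LoadBalancer]
use-octavia=true

Which keys you actually need depends on your authentication method, for example application credentials instead of a user name and password.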
They are similar to the global options that you can set by using the openstack CLI. Option Description auth-url The RHOSP Identity service URL. For example, http://128.110.154.166/identity . ca-file Optional. The CA certificate bundle file for communication with the RHOSP Identity service. If you use the HTTPS protocol with the Identity service URL, this option is required. domain-id The Identity service user domain ID. Leave this option unset if you are using Identity service application credentials. domain-name The Identity service user domain name. This option is not required if you set domain-id . tenant-id The Identity service project ID. Leave this option unset if you are using Identity service application credentials. In version 3 of the Identity API, which changed the identifier tenant to project , the value of tenant-id is automatically mapped to the project construct in the API. tenant-name The Identity service project name. username The Identity service user name. Leave this option unset if you are using Identity service application credentials. password The Identity service user password. Leave this option unset if you are using Identity service application credentials. region The Identity service region name. trust-id The Identity service trust ID. A trust represents the authorization of a user, or trustor, to delegate roles to another user, or trustee. Optionally, a trust authorizes the trustee to impersonate the trustor. You can find available trusts by querying the /v3/OS-TRUST/trusts endpoint of the Identity service API. 15.9.1.2. Load balancer options The cloud provider supports several load balancer options for deployments that use Octavia. Option Description use-octavia Whether or not to use Octavia for the LoadBalancer type of the service implementation rather than Neutron-LBaaS. The default value is true . floating-network-id Optional. The external network used to create floating IP addresses for load balancer virtual IP addresses (VIPs). If there are multiple external networks in the cloud, this option must be set or the user must specify loadbalancer.openstack.org/floating-network-id in the service annotation. lb-method The load balancing algorithm used to create the load balancer pool. For the Amphora provider the value can be ROUND_ROBIN , LEAST_CONNECTIONS , or SOURCE_IP . The default value is ROUND_ROBIN . For the OVN provider, only the SOURCE_IP_PORT algorithm is supported. For the Amphora provider, if using the LEAST_CONNECTIONS or SOURCE_IP methods, configure the create-monitor option as true in the cloud-provider-config config map on the openshift-config namespace and ETP:Local on the load-balancer type service to allow balancing algorithm enforcement in the client to service endpoint connections. lb-provider Optional. Used to specify the provider of the load balancer, for example, amphora or octavia . Only the Amphora and Octavia providers are supported. lb-version Optional. The load balancer API version. Only "v2" is supported. subnet-id The ID of the Networking service subnet on which load balancer VIPs are created. create-monitor Whether or not to create a health monitor for the service load balancer. A health monitor is required for services that declare externalTrafficPolicy: Local . The default value is false . This option is unsupported if you use RHOSP earlier than version 17 with the ovn provider. monitor-delay The interval in seconds by which probes are sent to members of the load balancer. The default value is 5 .
monitor-max-retries The number of successful checks that are required to change the operating status of a load balancer member to ONLINE . The valid range is 1 to 10 , and the default value is 1 . monitor-timeout The time in seconds that a monitor waits to connect to the back end before it times out. The default value is 3 . 15.9.1.3. Metadata options Option Description search-order This configuration key affects the way that the provider retrieves metadata that relates to the instances in which it runs. The default value of configDrive,metadataService results in the provider retrieving instance metadata from the configuration drive first if available, and then the metadata service. Alternative values are: configDrive : Only retrieve instance metadata from the configuration drive. metadataService : Only retrieve instance metadata from the metadata service. metadataService,configDrive : Retrieve instance metadata from the metadata service first if available, and then retrieve instance metadata from the configuration drive. 15.10. Uninstalling a cluster on OpenStack You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP). 15.10.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites Have a copy of the installation program that you used to deploy the cluster. Have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 15.11. Uninstalling a cluster on RHOSP from your own infrastructure You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure. 15.11.1. Downloading playbook dependencies The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. 
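You can verify that prerequisite before you continue; on the RHEL 8 machine assumed by these instructions, a check such as the following is usually enough:

USD python3 --version

If the command prints a Python 3.x version, continue with the procedure.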
Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 15.11.2. Removing a cluster from RHOSP that uses your own infrastructure You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, run several Ansible playbooks. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies." You have the playbooks that you used to install the cluster. You modified the playbooks that are prefixed with down- to reflect any changes that you made to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file are reflected in the down-bootstrap.yaml file. All of the playbooks are in a common directory. Procedure On a command line, run the playbooks that you downloaded: USD ansible-playbook -i inventory.yaml \ down-bootstrap.yaml \ down-control-plane.yaml \ down-compute-nodes.yaml \ down-load-balancers.yaml \ down-network.yaml \ down-security-groups.yaml Remove any DNS record changes you made for the OpenShift Container Platform installation. OpenShift Container Platform is removed from your infrastructure. | [
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"openstack --os-compute-api-version=2.15 server group create --policy anti-affinity my-openshift-worker-group",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"(undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack project show <project>",
"+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+",
"source stackrc # Undercloud credentials",
"openstack server list",
"+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+",
"ssh [email protected]",
"List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID",
"controller-0USD sudo docker restart octavia_worker",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"openstack role add --user <user> --project <project> swiftoperator",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"ip route add <cluster_network_cidr> via <installer_subnet_gateway>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"openstack --os-compute-api-version=2.15 server group create --policy anti-affinity my-openshift-worker-group",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"openstack role add --user <user> --project <project> swiftoperator",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"openstack network create radio --provider-physical-network radio --provider-network-type flat --external",
"openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external",
"openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio",
"openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target",
"oc apply -f <machine_config_file_name>.yaml",
"kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK",
"oc apply -f <machine_config_file_name>.yaml",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1",
"openshift-install --log-level debug wait-for install-complete",
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"(undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack project show <project>",
"+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+",
"source stackrc # Undercloud credentials",
"openstack server list",
"+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+",
"ssh [email protected]",
"List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID",
"controller-0USD sudo docker restart octavia_worker",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1",
"openshift-install --log-level debug wait-for install-complete",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"openstack network create radio --provider-physical-network radio --provider-network-type flat --external",
"openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external",
"openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio",
"openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink",
"# If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' additionalNetworks: - id: radio count: 4 1 type: direct port_security_enabled: no - id: uplink count: 4 2 type: direct port_security_enabled: no",
"- import_playbook: common.yaml - hosts: all gather_facts: no vars: worker_list: [] port_name_list: [] nic_list: [] tasks: # Create the SDN/primary port for each worker node - name: 'Create the Compute ports' os_port: name: \"{{ item.1 }}-{{ item.0 }}\" network: \"{{ os_network }}\" security_groups: - \"{{ os_sg_worker }}\" allowed_address_pairs: - ip_address: \"{{ os_ingressVIP }}\" with_indexed_items: \"{{ [os_port_worker] * os_compute_nodes_number }}\" register: ports # Tag each SDN/primary port with cluster name - name: 'Set Compute ports tag' command: cmd: \"openstack port set --tag {{ cluster_id_tag }} {{ item.1 }}-{{ item.0 }}\" with_indexed_items: \"{{ [os_port_worker] * os_compute_nodes_number }}\" - name: 'List the Compute Trunks' command: cmd: \"openstack network trunk list\" when: os_networking_type == \"Kuryr\" register: compute_trunks - name: 'Create the Compute trunks' command: cmd: \"openstack network trunk create --parent-port {{ item.1.id }} {{ os_compute_trunk_name }}-{{ item.0 }}\" with_indexed_items: \"{{ ports.results }}\" when: - os_networking_type == \"Kuryr\" - \"os_compute_trunk_name|string not in compute_trunks.stdout\" - name: 'Call additional-port processing' include_tasks: additional-ports.yaml # Create additional ports in OpenStack - name: 'Create additionalNetworks ports' os_port: name: \"{{ item.0 }}-{{ item.1.name }}\" vnic_type: \"{{ item.1.type }}\" network: \"{{ item.1.uuid }}\" port_security_enabled: \"{{ item.1.port_security_enabled|default(omit) }}\" no_security_groups: \"{{ 'true' if item.1.security_groups is not defined else omit }}\" security_groups: \"{{ item.1.security_groups | default(omit) }}\" with_nested: - \"{{ worker_list }}\" - \"{{ port_name_list }}\" # Tag the ports with the cluster info - name: 'Set additionalNetworks ports tag' command: cmd: \"openstack port set --tag {{ cluster_id_tag }} {{ item.0 }}-{{ item.1.name }}\" with_nested: - \"{{ worker_list }}\" - \"{{ port_name_list }}\" # Build the nic list to use for server create - name: Build nic list set_fact: nic_list: \"{{ nic_list | default([]) + [ item.name ] }}\" with_items: \"{{ port_name_list }}\" # Create the servers - name: 'Create the Compute servers' vars: worker_nics: \"{{ [ item.1 ] | product(nic_list) | map('join','-') | map('regex_replace', '(.*)', 'port-name=\\\\1') | list }}\" os_server: name: \"{{ item.1 }}\" image: \"{{ os_image_rhcos }}\" flavor: \"{{ os_flavor_worker }}\" auto_ip: no userdata: \"{{ lookup('file', 'worker.ign') | string }}\" security_groups: [] nics: \"{{ [ 'port-name=' + os_port_worker + '-' + item.0|string ] + worker_nics }}\" config_drive: yes with_indexed_items: \"{{ worker_list }}\"",
"Build a list of worker nodes with indexes - name: 'Build worker list' set_fact: worker_list: \"{{ worker_list | default([]) + [ item.1 + '-' + item.0 | string ] }}\" with_indexed_items: \"{{ [ os_compute_server_name ] * os_compute_nodes_number }}\" Ensure that each network specified in additionalNetworks exists - name: 'Verify additionalNetworks' os_networks_info: name: \"{{ item.id }}\" with_items: \"{{ additionalNetworks }}\" register: network_info Expand additionalNetworks by the count parameter in each network definition - name: 'Build port and port index list for additionalNetworks' set_fact: port_list: \"{{ port_list | default([]) + [ { 'net_name' : item.1.id, 'uuid' : network_info.results[item.0].openstack_networks[0].id, 'type' : item.1.type|default('normal'), 'security_groups' : item.1.security_groups|default(omit), 'port_security_enabled' : item.1.port_security_enabled|default(omit) } ] * item.1.count|default(1) }}\" index_list: \"{{ index_list | default([]) + range(item.1.count|default(1)) | list }}\" with_indexed_items: \"{{ additionalNetworks }}\" Calculate and save the name of the port The format of the name is cluster_name-worker-workerID-networkUUID(partial)-count i.e. fdp-nz995-worker-1-99bcd111-1 - name: 'Calculate port name' set_fact: port_name_list: \"{{ port_name_list | default([]) + [ item.1 | combine( {'name' : item.1.uuid | regex_search('([^-]+)') + '-' + index_list[item.0]|string } ) ] }}\" with_indexed_items: \"{{ port_list }}\" when: port_list is defined",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1",
"openshift-install --log-level debug wait-for install-complete",
"kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target",
"oc apply -f <machine_config_file_name>.yaml",
"kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK",
"oc apply -f <machine_config_file_name>.yaml",
"openstack role add --user <user> --project <project> swiftoperator",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"file <name_of_downloaded_file>",
"openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"ip route add <cluster_network_cidr> via <installer_subnet_gateway>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"openstack --os-compute-api-version=2.15 server group create --policy anti-affinity my-openshift-worker-group",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk",
"sudo alternatives --set python /usr/bin/python3",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/installing-on-openstack |
Chapter 52. metric | Chapter 52. metric This chapter describes the commands under the metric command. 52.1. metric aggregates Get measurements of aggregated metrics. Usage: Table 52.1. Positional arguments Value Summary operations Operations to apply to time series search A query to filter resource. the syntax is a combination of attribute, operator and value. For example: id=90d58eea-70d7-4294-a49a-170dcdf44c3c would filter resource with a certain id. More complex queries can be built, e.g.: not (flavor_id!="1" and memory>=24). Use "" to force data to be interpreted as string. Supported operators are: not, and, ∧ or, ∨, >=, ⇐, !=, >, <, =, ==, eq, ne, lt, gt, ge, le, in, like, !=, >=, <=, like, in. Table 52.2. Command arguments Value Summary -h, --help Show this help message and exit --resource-type RESOURCE_TYPE Resource type to query --start START Beginning of the period --stop STOP End of the period --granularity GRANULARITY Granularity to retrieve --needed-overlap NEEDED_OVERLAP Percentage of overlap across datapoints --groupby GROUPBY Attribute to use to group resources --fill FILL Value to use when backfilling timestamps with missing values in a subset of series. Value should be a float or null . Table 52.3. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.4. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.2. metric archive-policy create Create an archive policy. Usage: Table 52.7. Positional arguments Value Summary name Name of the archive policy Table 52.8. Command arguments Value Summary -h, --help Show this help message and exit -d <DEFINITION>, --definition <DEFINITION> Two attributes (separated by , ) of an archive policy definition with its name and value separated with a : -b BACK_WINDOW, --back-window BACK_WINDOW Back window of the archive policy -m AGGREGATION_METHODS, --aggregation-method AGGREGATION_METHODS Aggregation method of the archive policy Table 52.9. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.11. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.3. metric archive-policy delete Delete an archive policy. Usage: Table 52.13. Positional arguments Value Summary name Name of the archive policy Table 52.14. Command arguments Value Summary -h, --help Show this help message and exit 52.4. metric archive-policy list List archive policies. Usage: Table 52.15. Command arguments Value Summary -h, --help Show this help message and exit Table 52.16. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.17. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.18. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.5. metric archive-policy-rule create Create an archive policy rule. Usage: Table 52.20. Positional arguments Value Summary name Rule name Table 52.21. Command arguments Value Summary -h, --help Show this help message and exit -a ARCHIVE_POLICY_NAME, --archive-policy-name ARCHIVE_POLICY_NAME Archive policy name -m METRIC_PATTERN, --metric-pattern METRIC_PATTERN Wildcard of metric name to match Table 52.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.6. metric archive-policy-rule delete Delete an archive policy rule. Usage: Table 52.26. Positional arguments Value Summary name Name of the archive policy rule Table 52.27. Command arguments Value Summary -h, --help Show this help message and exit 52.7. metric archive-policy-rule list List archive policy rules. Usage: Table 52.28. 
Command arguments Value Summary -h, --help Show this help message and exit Table 52.29. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.30. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.32. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.8. metric archive-policy-rule show Show an archive policy rule. Usage: Table 52.33. Positional arguments Value Summary name Name of the archive policy rule Table 52.34. Command arguments Value Summary -h, --help Show this help message and exit Table 52.35. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.37. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.9. metric archive-policy show Show an archive policy. Usage: Table 52.39. Positional arguments Value Summary name Name of the archive policy Table 52.40. Command arguments Value Summary -h, --help Show this help message and exit Table 52.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.10. 
metric archive-policy update Update an archive policy. Usage: Table 52.45. Positional arguments Value Summary name Name of the archive policy Table 52.46. Command arguments Value Summary -h, --help Show this help message and exit -d <DEFINITION>, --definition <DEFINITION> Two attributes (separated by , ) of an archive policy definition with its name and value separated with a : Table 52.47. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.49. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.50. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.11. metric benchmark measures add Do benchmark testing of adding measurements. Usage: Table 52.51. Positional arguments Value Summary metric Id or name of the metric Table 52.52. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --workers WORKERS, -w WORKERS Number of workers to use --count COUNT, -n COUNT Number of total measures to send --batch BATCH, -b BATCH Number of measures to send in each batch --timestamp-start TIMESTAMP_START, -s TIMESTAMP_START First timestamp to use --timestamp-end TIMESTAMP_END, -e TIMESTAMP_END Last timestamp to use --wait Wait for all measures to be processed Table 52.53. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.54. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.55. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.56. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.12. metric benchmark measures show Do benchmark testing of measurements show. Usage: Table 52.57. Positional arguments Value Summary metric Id or name of the metric Table 52.58. 
Command arguments Value Summary -h, --help Show this help message and exit --utc Return timestamps as utc --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --aggregation AGGREGATION Aggregation to retrieve --start START Beginning of the period --stop STOP End of the period --granularity GRANULARITY Granularity to retrieve --refresh Force aggregation of all known measures --resample RESAMPLE Granularity to resample time-series to (in seconds) --workers WORKERS, -w WORKERS Number of workers to use --count COUNT, -n COUNT Number of total measures to send Table 52.59. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.60. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.61. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.62. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.13. metric benchmark metric create Do benchmark testing of metric creation. Usage: Table 52.63. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --archive-policy-name ARCHIVE_POLICY_NAME, -a ARCHIVE_POLICY_NAME Name of the archive policy --workers WORKERS, -w WORKERS Number of workers to use --count COUNT, -n COUNT Number of metrics to create --keep, -k Keep created metrics Table 52.64. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.65. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.66. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.67. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.14. metric benchmark metric show Do benchmark testing of metric show. Usage: Table 52.68. Positional arguments Value Summary metric Id or name of the metrics Table 52.69. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --workers WORKERS, -w WORKERS Number of workers to use --count COUNT, -n COUNT Number of metrics to get Table 52.70. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.71. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.72. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.73. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.15. metric capabilities list List capabilities. Usage: Table 52.74. Command arguments Value Summary -h, --help Show this help message and exit Table 52.75. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.76. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.77. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.78. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.16. metric create Create a metric. Usage: Table 52.79. Positional arguments Value Summary METRIC_NAME Name of the metric Table 52.80. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --archive-policy-name ARCHIVE_POLICY_NAME, -a ARCHIVE_POLICY_NAME Name of the archive policy --unit UNIT, -u UNIT Unit of the metric Table 52.81. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.82. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.83. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.84. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.17. metric delete Delete a metric. Usage: Table 52.85. Positional arguments Value Summary metric Ids or names of the metric Table 52.86. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource 52.18. metric list List metrics. Usage: Table 52.87. 
Command arguments Value Summary -h, --help Show this help message and exit --limit <LIMIT> Number of metrics to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value --sort <SORT> Sort of metric attribute (example: user_id:desc- nullslast Table 52.88. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.89. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.90. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.91. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.19. metric measures add Add measurements to a metric. Usage: Table 52.92. Positional arguments Value Summary metric Id or name of the metric Table 52.93. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource -m MEASURE, --measure MEASURE Timestamp and value of a measure separated with a @ 52.20. metric measures aggregation Get measurements of aggregated metrics. Usage: Table 52.94. Command arguments Value Summary -h, --help Show this help message and exit --utc Return timestamps as utc -m METRIC [METRIC ... ], --metric METRIC [METRIC ... ] Metrics ids or metric name --aggregation AGGREGATION Granularity aggregation function to retrieve --reaggregation REAGGREGATION Groupby aggregation function to retrieve --start START Beginning of the period --stop STOP End of the period --granularity GRANULARITY Granularity to retrieve --needed-overlap NEEDED_OVERLAP Percent of datapoints in each metrics required --query QUERY A query to filter resource. the syntax is a combination of attribute, operator and value. For example: id=90d58eea-70d7-4294-a49a-170dcdf44c3c would filter resource with a certain id. More complex queries can be built, e.g.: not (flavor_id!="1" and memory>=24). Use "" to force data to be interpreted as string. Supported operators are: not, and, ∧ or, ∨, >=, ⇐, !=, >, <, =, ==, eq, ne, lt, gt, ge, le, in, like, !=, >=, <=, like, in. --resource-type RESOURCE_TYPE Resource type to query --groupby GROUPBY Attribute to use to group resources --refresh Force aggregation of all known measures --resample RESAMPLE Granularity to resample time-series to (in seconds) --fill FILL Value to use when backfilling timestamps with missing values in a subset of series. Value should be a float or null . Table 52.95. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.96. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.97. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.98. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.21. metric measures batch-metrics Usage: Table 52.99. Positional arguments Value Summary file File containing measurements to batch or - for stdin (see Gnocchi REST API docs for the format Table 52.100. Command arguments Value Summary -h, --help Show this help message and exit 52.22. metric measures batch-resources-metrics Usage: Table 52.101. Positional arguments Value Summary file File containing measurements to batch or - for stdin (see Gnocchi REST API docs for the format Table 52.102. Command arguments Value Summary -h, --help Show this help message and exit --create-metrics Create unknown metrics 52.23. metric measures show Get measurements of a metric. Usage: Table 52.103. Positional arguments Value Summary metric Id or name of the metric Table 52.104. Command arguments Value Summary -h, --help Show this help message and exit --utc Return timestamps as utc --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource --aggregation AGGREGATION Aggregation to retrieve --start START Beginning of the period --stop STOP End of the period --granularity GRANULARITY Granularity to retrieve --refresh Force aggregation of all known measures --resample RESAMPLE Granularity to resample time-series to (in seconds) Table 52.105. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.106. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.107. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.108. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.24. metric resource batch delete Delete a batch of resources based on attribute values. Usage: Table 52.109. Positional arguments Value Summary query A query to filter resource. the syntax is a combination of attribute, operator and value. For example: id=90d58eea-70d7-4294-a49a-170dcdf44c3c would filter resource with a certain id. More complex queries can be built, e.g.: not (flavor_id!="1" and memory>=24). Use "" to force data to be interpreted as string. Supported operators are: not, and, ∧ or, ∨, >=, ⇐, !=, >, <, =, ==, eq, ne, lt, gt, ge, le, in, like, !=, >=, <=, like, in. Table 52.110. Command arguments Value Summary -h, --help Show this help message and exit --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 52.111. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.112. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.113. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.114. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.25. metric resource create Create a resource. Usage: Table 52.115. Positional arguments Value Summary resource_id Id of the resource Table 52.116. Command arguments Value Summary -h, --help Show this help message and exit --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource -a ATTRIBUTE, --attribute ATTRIBUTE Name and value of an attribute separated with a : -m ADD_METRIC, --add-metric ADD_METRIC Name:id of a metric to add -n CREATE_METRIC, --create-metric CREATE_METRIC Name:archive_policy_name of a metric to create Table 52.117. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.118. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.119. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.120. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.26. metric resource delete Delete a resource. Usage: Table 52.121. Positional arguments Value Summary resource_id Id of the resource Table 52.122. Command arguments Value Summary -h, --help Show this help message and exit 52.27. metric resource history Show the history of a resource. Usage: Table 52.123. 
Positional arguments Value Summary resource_id Id of a resource Table 52.124. Command arguments Value Summary -h, --help Show this help message and exit --details Show all attributes of generic resources --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value --sort <SORT> Sort of resource attribute (example: user_id:desc- nullslast --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 52.125. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.126. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.127. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.128. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.28. metric resource list List resources. Usage: Table 52.129. Command arguments Value Summary -h, --help Show this help message and exit --details Show all attributes of generic resources --history Show history of the resources --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value --sort <SORT> Sort of resource attribute (example: user_id:desc- nullslast --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 52.130. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.131. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.132. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.133. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.29. metric resource search Search resources with specified query rules. Usage: Table 52.134. Positional arguments Value Summary query A query to filter resource. 
the syntax is a combination of attribute, operator and value. For example: id=90d58eea-70d7-4294-a49a-170dcdf44c3c would filter resource with a certain id. More complex queries can be built, e.g.: not (flavor_id!="1" and memory>=24). Use "" to force data to be interpreted as string. Supported operators are: not, and, ∧ or, ∨, >=, ⇐, !=, >, <, =, ==, eq, ne, lt, gt, ge, le, in, like, !=, >=, <=, like, in. Table 52.135. Command arguments Value Summary -h, --help Show this help message and exit --details Show all attributes of generic resources --history Show history of the resources --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value --sort <SORT> Sort of resource attribute (example: user_id:desc- nullslast --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 52.136. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.137. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.138. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.139. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.30. metric resource show Show a resource. Usage: Table 52.140. Positional arguments Value Summary resource_id Id of a resource Table 52.141. Command arguments Value Summary -h, --help Show this help message and exit --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource Table 52.142. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.143. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.144. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.145. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.31. metric resource-type create Create a resource type. Usage: Table 52.146. Positional arguments Value Summary name Name of the resource type Table 52.147. 
Command arguments Value Summary -h, --help Show this help message and exit -a ATTRIBUTE, --attribute ATTRIBUTE Attribute definition, attribute_name:attribute_type:at tribute_is_required:attribute_type_option_name=attribu te_type_option_value:... For example: display_name:string:true:max_length=255 Table 52.148. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.149. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.150. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.151. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.32. metric resource-type delete Delete a resource type. Usage: Table 52.152. Positional arguments Value Summary name Name of the resource type Table 52.153. Command arguments Value Summary -h, --help Show this help message and exit 52.33. metric resource-type list List resource types. Usage: Table 52.154. Command arguments Value Summary -h, --help Show this help message and exit Table 52.155. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 52.156. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 52.157. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.158. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.34. metric resource-type show Show a resource type. Usage: Table 52.159. Positional arguments Value Summary name Name of the resource type Table 52.160. Command arguments Value Summary -h, --help Show this help message and exit Table 52.161. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.162. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.163. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.164. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.35. metric resource-type update Create a resource type. Usage: Table 52.165. Positional arguments Value Summary name Name of the resource type Table 52.166. Command arguments Value Summary -h, --help Show this help message and exit -a ATTRIBUTE, --attribute ATTRIBUTE Attribute definition, attribute_name:attribute_type:at tribute_is_required:attribute_type_option_name=attribu te_type_option_value:... For example: display_name:string:true:max_length=255 -r REMOVE_ATTRIBUTE, --remove-attribute REMOVE_ATTRIBUTE Attribute name Table 52.167. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.168. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.169. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.170. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.36. metric resource update Update a resource. Usage: Table 52.171. Positional arguments Value Summary resource_id Id of the resource Table 52.172. Command arguments Value Summary -h, --help Show this help message and exit --type RESOURCE_TYPE, -t RESOURCE_TYPE Type of resource -a ATTRIBUTE, --attribute ATTRIBUTE Name and value of an attribute separated with a : -m ADD_METRIC, --add-metric ADD_METRIC Name:id of a metric to add -n CREATE_METRIC, --create-metric CREATE_METRIC Name:archive_policy_name of a metric to create -d DELETE_METRIC, --delete-metric DELETE_METRIC Name of a metric to delete Table 52.173. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.174. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.175. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.176. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.37. metric server version Show the version of Gnocchi server. Usage: Table 52.177. Command arguments Value Summary -h, --help Show this help message and exit Table 52.178. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.179. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.180. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.181. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.38. metric show Show a metric. Usage: Table 52.182. Positional arguments Value Summary metric Id or name of the metric Table 52.183. Command arguments Value Summary -h, --help Show this help message and exit --resource-id RESOURCE_ID, -r RESOURCE_ID Id of the resource Table 52.184. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.185. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.186. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.187. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 52.39. metric status Show the status of measurements processing. Usage: Table 52.188. Command arguments Value Summary -h, --help Show this help message and exit Table 52.189. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 52.190. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 52.191. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 52.192. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack metric aggregates [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--resource-type RESOURCE_TYPE] [--start START] [--stop STOP] [--granularity GRANULARITY] [--needed-overlap NEEDED_OVERLAP] [--groupby GROUPBY] [--fill FILL] operations [search]",
"openstack metric archive-policy create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] -d <DEFINITION> [-b BACK_WINDOW] [-m AGGREGATION_METHODS] name",
"openstack metric archive-policy delete [-h] name",
"openstack metric archive-policy list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack metric archive-policy-rule create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] -a ARCHIVE_POLICY_NAME -m METRIC_PATTERN name",
"openstack metric archive-policy-rule delete [-h] name",
"openstack metric archive-policy-rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack metric archive-policy-rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] name",
"openstack metric archive-policy show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] name",
"openstack metric archive-policy update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] -d <DEFINITION> name",
"openstack metric benchmark measures add [-h] [--resource-id RESOURCE_ID] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--workers WORKERS] --count COUNT [--batch BATCH] [--timestamp-start TIMESTAMP_START] [--timestamp-end TIMESTAMP_END] [--wait] metric",
"openstack metric benchmark measures show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--utc] [--resource-id RESOURCE_ID] [--aggregation AGGREGATION] [--start START] [--stop STOP] [--granularity GRANULARITY] [--refresh] [--resample RESAMPLE] [--workers WORKERS] --count COUNT metric",
"openstack metric benchmark metric create [-h] [--resource-id RESOURCE_ID] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--archive-policy-name ARCHIVE_POLICY_NAME] [--workers WORKERS] --count COUNT [--keep]",
"openstack metric benchmark metric show [-h] [--resource-id RESOURCE_ID] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--workers WORKERS] --count COUNT metric [metric ...]",
"openstack metric capabilities list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]",
"openstack metric create [-h] [--resource-id RESOURCE_ID] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--archive-policy-name ARCHIVE_POLICY_NAME] [--unit UNIT] [METRIC_NAME]",
"openstack metric delete [-h] [--resource-id RESOURCE_ID] metric [metric ...]",
"openstack metric list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT>]",
"openstack metric measures add [-h] [--resource-id RESOURCE_ID] -m MEASURE metric",
"openstack metric measures aggregation [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--utc] -m METRIC [METRIC ...] [--aggregation AGGREGATION] [--reaggregation REAGGREGATION] [--start START] [--stop STOP] [--granularity GRANULARITY] [--needed-overlap NEEDED_OVERLAP] [--query QUERY] [--resource-type RESOURCE_TYPE] [--groupby GROUPBY] [--refresh] [--resample RESAMPLE] [--fill FILL]",
"openstack metric measures batch-metrics [-h] file",
"openstack metric measures batch-resources-metrics [-h] [--create-metrics] file",
"openstack metric measures show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--utc] [--resource-id RESOURCE_ID] [--aggregation AGGREGATION] [--start START] [--stop STOP] [--granularity GRANULARITY] [--refresh] [--resample RESAMPLE] metric",
"openstack metric resource batch delete [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type RESOURCE_TYPE] query",
"openstack metric resource create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type RESOURCE_TYPE] [-a ATTRIBUTE] [-m ADD_METRIC] [-n CREATE_METRIC] resource_id",
"openstack metric resource delete [-h] resource_id",
"openstack metric resource history [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--details] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT>] [--type RESOURCE_TYPE] resource_id",
"openstack metric resource list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--details] [--history] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT>] [--type RESOURCE_TYPE]",
"openstack metric resource search [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--details] [--history] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT>] [--type RESOURCE_TYPE] query",
"openstack metric resource show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type RESOURCE_TYPE] resource_id",
"openstack metric resource-type create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-a ATTRIBUTE] name",
"openstack metric resource-type delete [-h] name",
"openstack metric resource-type list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack metric resource-type show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] name",
"openstack metric resource-type update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-a ATTRIBUTE] [-r REMOVE_ATTRIBUTE] name",
"openstack metric resource update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type RESOURCE_TYPE] [-a ATTRIBUTE] [-m ADD_METRIC] [-n CREATE_METRIC] [-d DELETE_METRIC] resource_id",
"openstack metric server version [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]",
"openstack metric show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--resource-id RESOURCE_ID] metric",
"openstack metric status [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]"
]
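Taken together, a typical workflow with these subcommands is to create a resource with a metric and then write and read measures against it. The following sketch is illustrative only: the resource type (generic), the archive policy name (low), the metric name, the resource ID (reused from the query examples above), and the timestamp/value are placeholder assumptions rather than values defined by this reference.

openstack metric resource create --type generic --create-metric cpu.util:low 90d58eea-70d7-4294-a49a-170dcdf44c3c
openstack metric measures add --resource-id 90d58eea-70d7-4294-a49a-170dcdf44c3c --measure 2023-01-01T12:00:00@42.0 cpu.util
openstack metric measures show --resource-id 90d58eea-70d7-4294-a49a-170dcdf44c3c cpu.util

The first command creates the resource and a metric bound to the named archive policy in one step; the other two push a single timestamp@value measure and read it back, as described in the metric measures add and metric measures show entries above.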
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/metric |
Chapter 5. About the Migration Toolkit for Containers | Chapter 5. About the Migration Toolkit for Containers The Migration Toolkit for Containers (MTC) enables you to migrate stateful application workloads from OpenShift Container Platform 3 to 4.17 at the granularity of a namespace. Important Before you begin your migration, be sure to review the differences between OpenShift Container Platform 3 and 4 . MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. The MTC console is installed on the target cluster by default. You can configure the Migration Toolkit for Containers Operator to install the console on an OpenShift Container Platform 3 source cluster or on a remote cluster . MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. The service catalog is deprecated in OpenShift Container Platform 4. You can migrate workload resources provisioned with the service catalog from OpenShift Container Platform 3 to 4 but you cannot perform service catalog actions such as provision , deprovision , or update on these workloads after migration. The MTC console displays a message if the service catalog resources cannot be migrated. 5.1. Terminology Table 5.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 5.2. 
MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.17 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. 5.3. 
About data copy methods The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 5.3.1. File system copy method MTC copies data files from the source cluster to the replication repository, and from there to the target cluster. The file system copy method uses Restic for indirect migration or Rsync for direct volume migration. Table 5.2. File system copy method summary Benefits Limitations Clusters can have different storage classes. Supported for all S3 storage providers. Optional data verification with checksum. Supports direct volume migration, which significantly increases performance. Slower than the snapshot copy method. Optional data verification significantly reduces performance. Note The Restic and Rsync PV migration assumes that the PVs supported are only volumeMode=filesystem . Using volumeMode=Block for file system migration is not supported. 5.3.2. Snapshot copy method MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster. The snapshot copy method can be used with Amazon Web Services, Google Cloud Provider, and Microsoft Azure. Table 5.3. Snapshot copy method summary Benefits Limitations Faster than the file system copy method. Cloud provider must support snapshots. Clusters must be on the same cloud provider. Clusters must be in the same location or region. Clusters must have the same storage class. Storage class must be compatible with snapshots. Does not support direct volume migration. 5.4. Direct volume migration and direct image migration You can use direct image migration (DIM) and direct volume migration (DVM) to migrate images and data directly from the source cluster to the target cluster. If you run DVM with nodes that are in different availability zones, the migration might fail because the migrated pods cannot access the persistent volume claim. DIM and DVM have significant performance benefits because the intermediate steps of backing up files from the source cluster to the replication repository and restoring files from the replication repository to the target cluster are skipped. The data is transferred with Rsync . DIM and DVM have additional prerequisites. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migrating_from_version_3_to_4/about-mtc-3-4 |
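The migration plan created in the workflow above is represented on the host cluster by a MigPlan custom resource. The following minimal sketch is an assumption based on the standard MTC API rather than on anything shown in this chapter; the plan, cluster, replication repository, and namespace names are all placeholders.

apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: example-migplan
  namespace: openshift-migration
spec:
  # source and destination clusters as registered with MTC (placeholder names)
  srcMigClusterRef:
    name: source-cluster
    namespace: openshift-migration
  destMigClusterRef:
    name: host
    namespace: openshift-migration
  # replication repository added in the MTC web console (placeholder name)
  migStorageRef:
    name: replication-repository
    namespace: openshift-migration
  # namespaces to migrate from the source cluster
  namespaces:
    - example-app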
21.4. The guestfish Shell | 21.4. The guestfish Shell guestfish is an interactive shell that you can use from the command line or from shell scripts to access guest virtual machine file systems. All of the functionality of the libguestfs API is available from the shell. To begin viewing or editing a virtual machine disk image, enter the following command, substituting the path to your intended disk image: --ro means that the disk image is opened read-only. This mode is always safe but does not allow write access. Only omit this option when you are certain that the guest virtual machine is not running, or the disk image is not attached to a live guest virtual machine. It is not possible to use libguestfs to edit a live guest virtual machine, and attempting to will result in irreversible disk corruption. /path/to/disk/image is the path to the disk. This can be a file, a host physical machine logical volume (such as /dev/VG/LV), or a SAN LUN (/dev/sdf3). Note libguestfs and guestfish do not require root privileges. You only need to run them as root if the disk image being accessed needs root to read or write or both. When you start guestfish interactively, it will display this prompt: At the prompt, type run to initiate the library and attach the disk image. This can take up to 30 seconds the first time it is done. Subsequent starts will complete much faster. Note libguestfs will use hardware virtualization acceleration such as KVM (if available) to speed up this process. Once the run command has been entered, other commands can be used, as the following section demonstrates. 21.4.1. Viewing File Systems with guestfish This section provides information on viewing file systems with guestfish. 21.4.1.1. Manual Listing and Viewing The list-filesystems command will list file systems found by libguestfs. This output shows a Red Hat Enterprise Linux 4 disk image: Other useful commands are list-devices , list-partitions , lvs , pvs , vfs-type and file . You can get more information and help on any command by typing help command , as shown in the following output: To view the actual contents of a file system, it must first be mounted. You can use guestfish commands such as ls , ll , cat , more , download and tar-out to view and download files and directories. Note There is no concept of a current working directory in this shell. Unlike ordinary shells, you cannot for example use the cd command to change directories. All paths must be fully qualified starting at the top with a forward slash ( / ) character. Use the Tab key to complete paths. To exit from the guestfish shell, type exit or enter Ctrl+d . 21.4.1.2. Via guestfish inspection Instead of listing and mounting file systems by hand, it is possible to let guestfish itself inspect the image and mount the file systems as they would be in the guest virtual machine. To do this, add the -i option on the command line: Because guestfish needs to start up the libguestfs back end in order to perform the inspection and mounting, the run command is not necessary when using the -i option. The -i option works for many common Linux guest virtual machines. 21.4.1.3. Accessing a guest virtual machine by name A guest virtual machine can be accessed from the command line when you specify its name as known to libvirt (in other words, as it appears in virsh list --all ). Use the -d option to access a guest virtual machine by its name, with or without the -i option: 21.4.2. Adding Files with guestfish To add a file with guestfish you need to have the complete URI. 
The file can be a local file or a file located on a network block device (NBD) or a remote block device (RBD). The format used for the URI should be like any of these examples. For local files, use ///: guestfish -a disk.img guestfish -a file:///directory/disk.img guestfish -a nbd://example.com[:port] guestfish -a nbd://example.com[:port]/exportname guestfish -a nbd://?socket=/socket guestfish -a nbd:///exportname?socket=/socket guestfish -a rbd:///pool/disk guestfish -a rbd://example.com[:port]/pool/disk 21.4.3. Modifying Files with guestfish To modify files, create directories or make other changes to a guest virtual machine, first heed the warning at the beginning of this section: your guest virtual machine must be shut down. Editing or changing a running disk with guestfish will result in disk corruption. This section gives an example of editing the /boot/grub/grub.conf file. When you are sure the guest virtual machine is shut down, you can omit the --ro flag in order to get write access using a command such as: Commands to edit files include edit , vi and emacs . Many commands also exist for creating files and directories, such as write , mkdir , upload and tar-in . 21.4.4. Other Actions with guestfish You can also format file systems, create partitions, create and resize LVM logical volumes and much more, with commands such as mkfs , part-add , lvresize , lvcreate , vgcreate and pvcreate . 21.4.5. Shell Scripting with guestfish Once you are familiar with using guestfish interactively, you may find it useful to write shell scripts with it. The following is a simple shell script to add a new MOTD (message of the day) to a guest: 21.4.6. Augeas and libguestfs Scripting Combining libguestfs with Augeas can help when writing scripts to manipulate Linux guest virtual machine configuration. For example, the following script uses Augeas to parse the keyboard configuration of a guest virtual machine, and to print out the layout. Note that this example only works with guest virtual machines running Red Hat Enterprise Linux: Augeas can also be used to modify configuration files. You can modify the above script to change the keyboard layout: Note the three changes between the two scripts: The --ro option has been removed in the second example, giving the ability to write to the guest virtual machine. The aug-get command has been changed to aug-set to modify the value instead of fetching it. The new value will be "gb" (including the quotes). The aug-save command is used here so Augeas will write the changes out to disk. Note More information about Augeas can be found on the website http://augeas.net. guestfish can do much more than we can cover in this introductory document. For example, creating disk images from scratch: Or copying out whole directories from a disk image: For more information, see the man page guestfish(1). | [
"guestfish --ro -a /path/to/disk/image",
"guestfish --ro -a /path/to/disk/image Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell ><fs>",
"><fs> run ><fs> list-filesystems /dev/vda1: ext3 /dev/VolGroup00/LogVol00: ext3 /dev/VolGroup00/LogVol01: swap",
"><fs> help vfs-type NAME vfs-type - get the Linux VFS type corresponding to a mounted device SYNOPSIS vfs-type mountable DESCRIPTION This command gets the filesystem type corresponding to the filesystem on \"device\". For most filesystems, the result is the name of the Linux VFS module which would be used to mount this filesystem if you mounted it without specifying the filesystem type. For example a string such as \"ext3\" or \"ntfs\".",
"guestfish --ro -a /path/to/disk/image -i Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell Operating system: Red Hat Enterprise Linux AS release 4 (Nahant Update 8) /dev/VolGroup00/LogVol00 mounted on / /dev/vda1 mounted on /boot ><fs> ll / total 210 drwxr-xr-x. 24 root root 4096 Oct 28 09:09 . drwxr-xr-x 21 root root 4096 Nov 17 15:10 .. drwxr-xr-x. 2 root root 4096 Oct 27 22:37 bin drwxr-xr-x. 4 root root 1024 Oct 27 21:52 boot drwxr-xr-x. 4 root root 4096 Oct 27 21:21 dev drwxr-xr-x. 86 root root 12288 Oct 28 09:09 etc",
"guestfish --ro -d GuestName -i",
"guestfish -d RHEL3 -i Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell Operating system: Red Hat Enterprise Linux AS release 3 (Taroon Update 9) /dev/vda2 mounted on / /dev/vda1 mounted on /boot ><fs> edit /boot/grub/grub.conf",
"#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USDguestname\" -i <<'EOF' write /etc/motd \"Welcome to Acme Incorporated.\" chmod 0644 /etc/motd EOF",
"#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USD1\" -i --ro <<'EOF' aug-init / 0 aug-get /files/etc/sysconfig/keyboard/LAYOUT EOF",
"#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USD1\" -i <<'EOF' aug-init / 0 aug-set /files/etc/sysconfig/keyboard/LAYOUT '\"gb\"' aug-save EOF",
"guestfish -N fs",
"><fs> copy-out /home /tmp/home"
]
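For readers who prefer the manual workflow from Section 21.4.1.1 over the -i option, the following short session sketch mounts the file systems by hand before browsing them. The device names assume the same Red Hat Enterprise Linux guest layout shown in the earlier list-filesystems output and will differ on other disk images.

><fs> run
><fs> mount /dev/VolGroup00/LogVol00 /
><fs> mount /dev/vda1 /boot
><fs> ls /boot
><fs> cat /etc/redhat-release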
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_disk_access_with_offline_tools-the_guestfish_shell |
function::caller | function::caller Name function::caller - Return name and address of calling function Synopsis Arguments None Description This function returns the address and name of the calling function. This is equivalent to calling: sprintf("%s 0x%x", symname(caller_addr), caller_addr) | [
"caller:string()"
]
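As a brief illustration of how this function might be used in a script, the following sketch prints the probed function together with the "name 0xaddress" string that caller returns, then exits. The choice of vfs_read as the probe point is an arbitrary assumption and is not part of this reference entry.

probe kernel.function("vfs_read")
{
  # caller() returns the calling function's name and address in one string
  printf("%s called from %s\n", probefunc(), caller())
  exit()
}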
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-caller |
Chapter 1. Release notes for the Red Hat OpenShift distributed tracing platform | Chapter 1. Release notes for the Red Hat OpenShift distributed tracing platform 1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis You can use the Red Hat OpenShift distributed tracing platform (Tempo) in combination with the Red Hat build of OpenTelemetry . Note Only supported features are documented. Undocumented features are currently unsupported. If you need assistance with a feature, contact Red Hat's support. 1.2. Release notes for Red Hat OpenShift distributed tracing platform 3.5 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.2.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) 3.5 is provided through the Tempo Operator 0.15.3 . Note The Red Hat OpenShift distributed tracing platform (Tempo) 3.5 is based on the open source Grafana Tempo 2.7.1. 1.2.1.1. New features and enhancements This update introduces the following enhancements: With this update, you can configure the Tempo backend services to report the internal tracing data by using the OpenTelemetry Protocol (OTLP). With this update, the traces.span.metrics namespace becomes the default metrics namespace on which the Jaeger query retrieves the Prometheus metrics. The purpose of this change is to provide compatibility with the OpenTelemetry Collector version 0.109.0 and later where this namespace is the default. Customers who are still using an earlier OpenTelemetry Collector version can configure this namespace by adding the following field and value: spec.template.queryFrontend.jaegerQuery.monitorTab.redMetricsNamespace: "" . 1.2.1.2. Bug fixes This update introduces the following bug fix: Before this update, the Tempo Operator failed when the TempoStack custom resource had the spec.storage.tls.enabled field set to true and used an Amazon S3 object store with the Security Token Service (STS) authentication. With this update, such a TempoStack custom resource configuration does not cause the Tempo Operator to fail. 1.2.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Warning Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. 
For more information, see the following resources: Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (distributed tracing platform (Tempo) documentation) Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase solution) Note The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is based on the open source Jaeger release 1.65.0. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is provided through the Red Hat OpenShift distributed tracing platform Operator 1.65.0 . Important Jaeger does not use FIPS validated cryptographic modules. 1.2.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. Additional resources Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase) Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (Distributed tracing platform (Tempo) documentation) 1.2.2.2. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.3. Release notes for Red Hat OpenShift distributed tracing platform 3.4 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.3.1. CVEs This release fixes the following CVEs: CVE-2024-21536 CVE-2024-43796 CVE-2024-43799 CVE-2024-43800 CVE-2024-45296 CVE-2024-45590 CVE-2024-45811 CVE-2024-45812 CVE-2024-47068 Cross-site Scripting (XSS) in serialize-javascript 1.3.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) 3.4 is provided through the Tempo Operator 0.14.1 . Note The Red Hat OpenShift distributed tracing platform (Tempo) 3.4 is based on the open source Grafana Tempo 2.6.1. 1.3.2.1. New features and enhancements This update introduces the following enhancements: The monitor tab in the Jaeger UI for TempoStack instances uses a new default metrics namespace: traces.span.metrics . Before this update, the Jaeger UI used an empty namespace. The new traces.span.metrics namespace default is also used by the OpenTelemetry Collector 0.113.0. You can set the empty value for the metrics namespace by using the following field in the TempoStack custom resouce: spec.template.queryFrontend.monitorTab.redMetricsNamespace: "" . Warning This is a breaking change. If you are using both the Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat build of OpenTelemetry, you must upgrade to the Red Hat build of OpenTelemetry 3.4 before upgrading to the Red Hat OpenShift distributed tracing platform (Tempo) 3.4. New and optional spec.timeout field in the TempoStack and TempoMonolithic custom resource definitions for configuring one timeout value for all components. The timeout value is set to 30 seconds, 30s , by default. Warning This is a breaking change. 1.3.2.2. Bug fixes This update introduces the following bug fixes: Before this update, the distributed tracing platform (Tempo) failed on the IBM Z ( s390x ) architecture. With this update, the distributed tracing platform (Tempo) is available for the IBM Z ( s390x ) architecture. 
( TRACING-3545 ) Before this update, the distributed tracing platform (Tempo) failed on clusters with non-private networks. With this update, you can deploy the distributed tracing platform (Tempo) on clusters with non-private networks. ( TRACING-4507 ) Before this update, the Jaeger UI might fail due to reaching a trace quantity limit, resulting in the 504 Gateway Timeout error in the tempo-query logs. After this update, the issue is resolved by introducing two optional fields in the tempostack or tempomonolithic custom resource: New spec.timeout field for configuring the timeout. New spec.template.queryFrontend.jaegerQuery.findTracesConcurrentRequests field for improving the query performance of the Jaeger UI. Tip One querier can handle up to 20 concurrent queries by default. Increasing the number of concurrent queries further is achieved by scaling up the querier instances. 1.3.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is provided through the Red Hat OpenShift distributed tracing platform Operator 1.62.0 . Note The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is based on the open source Jaeger release 1.62.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.3.3.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.3.3.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.4, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see Migrating in the Red Hat build of OpenTelemetry documentation, Installing in the Red Hat build of OpenTelemetry documentation, and Installing in the distributed tracing platform (Tempo) documentation. Additional resources Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase) Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (Distributed tracing platform (Tempo) documentation) 1.3.3.3. Bug fixes This update introduces the following bug fix: Before this update, the Jaeger UI could fail with the 502 - Bad Gateway Timeout error. After this update, you can configure timeout in ingress annotations. ( TRACING-4238 ) 1.3.3.4. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.4. 
Release notes for Red Hat OpenShift distributed tracing platform 3.3.1 The Red Hat OpenShift distributed tracing platform 3.3.1 is a maintenance release with no changes because the Red Hat OpenShift distributed tracing platform is bundled with the Red Hat build of OpenTelemetry that is released with a bug fix. This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.4.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. The Red Hat OpenShift distributed tracing platform (Tempo) 3.3.1 is based on the open source Grafana Tempo 2.5.0. 1.4.1.1. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.4.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3.1 is based on the open source Jaeger release 1.57.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.4.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.4.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.3.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. 1.4.2.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.5. Release notes for Red Hat OpenShift distributed tracing platform 3.3 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.5.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. The Red Hat OpenShift distributed tracing platform (Tempo) 3.3 is based on the open source Grafana Tempo 2.5.0. 1.5.1.1. New features and enhancements This update introduces the following enhancements: Support for securing the Jaeger UI and Jaeger APIs with the OpenShift OAuth Proxy. ( TRACING-4108 ) Support for using the service serving certificates, which are generated by OpenShift Container Platform, on ingestion APIs when multitenancy is disabled. ( TRACING-3954 ) Support for ingesting by using the OTLP/HTTP protocol when multitenancy is enabled. ( TRACING-4171 ) Support for the AWS S3 Secure Token authentication. 
( TRACING-4176 ) Support for automatically reloading certificates. ( TRACING-4185 ) Support for configuring the duration for which service names are available for querying. ( TRACING-4214 ) 1.5.1.2. Bug fixes This update introduces the following bug fixes: Before this update, storage certificate names did not support dots. With this update, storage certificate names can contain dots. ( TRACING-4348 ) Before this update, some users had to select a certificate when accessing the gateway route. With this update, there is no prompt to select a certificate. ( TRACING-4431 ) Before this update, the gateway component was not scalable. With this update, the gateway component is scalable. ( TRACING-4497 ) Before this update, the Jaeger UI might fail with the 504 Gateway Time-out error when accessed via a route. With this update, users can specify route annotations for increasing the timeout, such as haproxy.router.openshift.io/timeout: 3m , when querying large data sets. ( TRACING-4511 ) 1.5.1.3. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.5.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3 is based on the open source Jaeger release 1.57.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.5.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.5.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.3, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. 1.5.2.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.6. Release notes for Red Hat OpenShift distributed tracing platform 3.2.2 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.6.1. CVEs This release fixes the following CVEs: CVE-2023-2953 CVE-2024-28182 1.6.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.6.2.1. Bug fixes This update introduces the following bug fix: Before this update, secrets were perpetually generated on OpenShift Container Platform 4.16 because the operator tried to reconcile a new openshift.io/internal-registry-pull-secret-ref annotation for service accounts, causing a loop. With this update, the operator ignores this new annotation. 
( TRACING-4434 ) 1.6.2.2. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.6.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.6.3.1. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.7. Release notes for Red Hat OpenShift distributed tracing platform 3.2.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.7.1. CVEs This release fixes CVE-2024-25062 . 1.7.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.7.2.1. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.7.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.7.3.1. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.8. Release notes for Red Hat OpenShift distributed tracing platform 3.2 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.8.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.8.1.1. Technology Preview features This update introduces the following Technology Preview feature: Support for the Tempo monolithic deployment. Important The Tempo monolithic deployment is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.8.1.2. New features and enhancements This update introduces the following enhancements: Red Hat OpenShift distributed tracing platform (Tempo) 3.2 is based on the open source Grafana Tempo 2.4.1. Allowing the overriding of resources per component. 1.8.1.3. Bug fixes This update introduces the following bug fixes: Before this update, the Jaeger UI only displayed services that sent traces in the last 15 minutes. 
With this update, the availability of the service and operation names can be configured by using the following field: spec.template.queryFrontend.jaegerQuery.servicesQueryDuration . ( TRACING-3139 ) Before this update, the query-frontend pod might get stopped when out-of-memory (OOM) as a result of searching a large trace. With this update, resource limits can be set to prevent this issue. ( TRACING-4009 ) 1.8.1.4. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.8.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.8.2.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.2 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.8.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.2, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Tempo Operator and the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. Users must adopt the OpenTelemetry and Tempo distributed tracing stack because it is the stack to be enhanced going forward. In the Red Hat OpenShift distributed tracing platform 3.2, the Jaeger agent is deprecated and planned to be removed in the following release. Red Hat will provide bug fixes and support for the Jaeger agent during the current release lifecycle, but the Jaeger agent will no longer receive enhancements and will be removed. The OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry is the preferred Operator for injecting the trace collector agent. 1.8.2.3. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Red Hat OpenShift distributed tracing platform (Jaeger) 3.2 is based on the open source Jaeger release 1.57.0. 1.8.2.4. Known issues There is currently a known issue: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.9. Release notes for Red Hat OpenShift distributed tracing platform 3.1.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.9.1. CVEs This release fixes CVE-2023-39326 . 1.9.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.9.2.1. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. 
( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.9.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.9.3.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.1.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.9.3.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.1.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.1.1, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.9.3.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.10. Release notes for Red Hat OpenShift distributed tracing platform 3.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.10.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.10.1.1. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Tempo): Red Hat OpenShift distributed tracing platform (Tempo) 3.1 is based on the open source Grafana Tempo 2.3.1. Support for cluster-wide proxy environments. Support for TraceQL to Gateway component. 1.10.1.2. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Tempo): Before this update, when a TempoStack instance was created with the monitorTab enabled in OpenShift Container Platform 4.15, the required tempo-redmetrics-cluster-monitoring-view ClusterRoleBinding was not created. This update resolves the issue by fixing the Operator RBAC for the monitor tab when the Operator is deployed in an arbitrary namespace. ( TRACING-3786 ) Before this update, when a TempoStack instance was created on an OpenShift Container Platform cluster with only an IPv6 networking stack, the compactor and ingestor pods ran in the CrashLoopBackOff state, resulting in multiple errors. This update provides support for IPv6 clusters.( TRACING-3226 ) 1.10.1.3. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. 
( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.10.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.10.2.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.10.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.1, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.10.2.3. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Red Hat OpenShift distributed tracing platform (Jaeger) 3.1 is based on the open source Jaeger release 1.53.0. 1.10.2.4. Bug fixes This update introduces the following bug fix for the distributed tracing platform (Jaeger): Before this update, the connection target URL for the jaeger-agent container in the jaeger-query pod was overwritten with another namespace URL in OpenShift Container Platform 4.13. This was caused by a bug in the sidecar injection code in the jaeger-operator , causing nondeterministic jaeger-agent injection. With this update, the Operator prioritizes the Jaeger instance from the same namespace as the target deployment. ( TRACING-3722 ) 1.10.2.5. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.11. Release notes for Red Hat OpenShift distributed tracing platform 3.0 1.11.1. Component versions in the Red Hat OpenShift distributed tracing platform 3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.51.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.3.0 1.11.2. Red Hat OpenShift distributed tracing platform (Jaeger) 1.11.2.1. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.0, Jaeger and support for Elasticsearch are deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.0, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. 
The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.11.2.2. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Support for the ARM architecture. Support for cluster-wide proxy environments. 1.11.2.3. Bug fixes This update introduces the following bug fix for the distributed tracing platform (Jaeger): Before this update, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator used images other than those specified in relatedImages . This caused the ImagePullBackOff error in disconnected network environments when launching the jaeger pod because the oc adm catalog mirror command mirrors images specified in relatedImages . This update provides support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3546 ) 1.11.2.4. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.11.3. Red Hat OpenShift distributed tracing platform (Tempo) 1.11.3.1. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Tempo): Support for the ARM architecture. Support for span request count, duration, and error count (RED) metrics. The metrics can be visualized in the Jaeger console deployed as part of Tempo or in the web console in the Observe menu. 1.11.3.2. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Tempo): Before this update, the TempoStack CRD was not accepting a custom CA certificate despite the option to choose CA certificates. This update fixes support for the custom TLS CA option for connecting to object storage. ( TRACING-3462 ) Before this update, when mirroring the Red Hat OpenShift distributed tracing platform Operator images to a mirror registry for use in a disconnected cluster, the related Operator images for tempo , tempo-gateway , opa-openshift , and tempo-query were not mirrored. This update fixes support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3523 ) Before this update, the query frontend service of the Red Hat OpenShift distributed tracing platform was using internal mTLS when the Gateway was not deployed. This caused endpoint failure errors. This update fixes mTLS when Gateway is not deployed. ( TRACING-3510 ) 1.11.3.3. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.12. Release notes for Red Hat OpenShift distributed tracing platform 2.9.2 1.12.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.2 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.12.2. CVEs This release fixes CVE-2023-46234 . 1.12.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.12.3.1. Known issues There are currently known issues: Apache Spark is not supported. 
The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.12.4. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.12.4.1. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.13. Release notes for Red Hat OpenShift distributed tracing platform 2.9.1 1.13.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.13.2. 
CVEs This release fixes CVE-2023-44487 . 1.13.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.13.3.1. Known issues There are currently known issues: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.13.4. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.13.4.1. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.14. Release notes for Red Hat OpenShift distributed tracing platform 2.9 1.14.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.9 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.14.2. Red Hat OpenShift distributed tracing platform (Jaeger) 1.14.2.1. Bug fixes Before this update, connection was refused due to a missing gRPC port on the jaeger-query deployment. This issue resulted in transport: Error while dialing: dial tcp :16685: connect: connection refused error message. With this update, the Jaeger Query gRPC port (16685) is successfully exposed on the Jaeger Query service. ( TRACING-3322 ) Before this update, the wrong port was exposed for jaeger-production-query , resulting in refused connection. With this update, the issue is fixed by exposing the Jaeger Query gRPC port (16685) on the Jaeger Query deployment. ( TRACING-2968 ) Before this update, when deploying Service Mesh on single-node OpenShift clusters in disconnected environments, the Jaeger pod frequently went into the Pending state. With this update, the issue is fixed. ( TRACING-3312 ) Before this update, the Jaeger Operator pod restarted with the default memory value due to the reason: OOMKilled error message. With this update, this issue is fixed by removing the resource limits. ( TRACING-3173 ) 1.14.2.2. Known issues There are currently known issues: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.14.3. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.14.3.1. New features and enhancements This release introduces the following enhancements for the distributed tracing platform (Tempo): Support the operator maturity Level IV, Deep Insights, which enables upgrading, monitoring, and alerting of the TempoStack instances and the Tempo Operator. Add Ingress and Route configuration for the Gateway. Support the managed and unmanaged states in the TempoStack custom resource. Expose the following additional ingestion protocols in the Distributor service: Jaeger Thrift binary, Jaeger Thrift compact, Jaeger gRPC, and Zipkin. When the Gateway is enabled, only the OpenTelemetry protocol (OTLP) gRPC is enabled. Expose the Jaeger Query gRPC endpoint on the Query Frontend service. Support multitenancy without Gateway authentication and authorization. 1.14.3.2. Bug fixes Before this update, the Tempo Operator was not compatible with disconnected environments. With this update, the Tempo Operator supports disconnected environments. ( TRACING-3145 ) Before this update, the Tempo Operator with TLS failed to start on OpenShift Container Platform. With this update, the mTLS communication is enabled between Tempo components, the Operand starts successfully, and the Jaeger UI is accessible. 
( TRACING-3091 ) Before this update, the resource limits from the Tempo Operator caused error messages such as reason: OOMKilled . With this update, the resource limits for the Tempo Operator are removed to avoid such errors. ( TRACING-3204 ) 1.14.3.3. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.15. Release notes for Red Hat OpenShift distributed tracing platform 2.8 1.15.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.8 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.42 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 0.1.0 1.15.2. Technology Preview features This release introduces support for the Red Hat OpenShift distributed tracing platform (Tempo) as a Technology Preview feature for Red Hat OpenShift distributed tracing platform. Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The feature uses version 0.1.0 of the Red Hat OpenShift distributed tracing platform (Tempo) and version 2.0.1 of the upstream distributed tracing platform (Tempo) components. You can use the distributed tracing platform (Tempo) to replace Jaeger so that you can use S3-compatible storage instead of ElasticSearch. Most users who use the distributed tracing platform (Tempo) instead of Jaeger will not notice any difference in functionality because the distributed tracing platform (Tempo) supports the same ingestion and query protocols as Jaeger and uses the same user interface. If you enable this Technology Preview feature, note the following limitations of the current implementation: The distributed tracing platform (Tempo) currently does not support disconnected installations. ( TRACING-3145 ) When you use the Jaeger user interface (UI) with the distributed tracing platform (Tempo), the Jaeger UI lists only services that have sent traces within the last 15 minutes. For services that have not sent traces within the last 15 minutes, those traces are still stored even though they are not visible in the Jaeger UI. ( TRACING-3139 ) Expanded support for the Tempo Operator is planned for future releases of the Red Hat OpenShift distributed tracing platform. Possible additional features might include support for TLS authentication, multitenancy, and multiple clusters. For more information about the Tempo Operator, see the Tempo community documentation . 1.15.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.16. Release notes for Red Hat OpenShift distributed tracing platform 2.7 1.16.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.7 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.39 1.16.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.17. Release notes for Red Hat OpenShift distributed tracing platform 2.6 1.17.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.6 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.38 1.17.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.18. Release notes for Red Hat OpenShift distributed tracing platform 2.5 1.18.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.5 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.36 1.18.2. New features and enhancements This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. The Operator now automatically enables the OTLP ports: Port 4317 for the OTLP gRPC protocol. Port 4318 for the OTLP HTTP protocol. This release also adds support for collecting Kubernetes resource attributes to the Red Hat build of OpenTelemetry Operator. 1.18.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.19. Release notes for Red Hat OpenShift distributed tracing platform 2.4 1.19.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.4 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.34.1 1.19.2. New features and enhancements This release adds support for auto-provisioning certificates using the OpenShift Elasticsearch Operator. Self-provisioning by using the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to call the OpenShift Elasticsearch Operator during installation. + Important When upgrading to the Red Hat OpenShift distributed tracing platform 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing will be down and unavailable for that period. 1.19.3. Technology Preview features Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform (Jaeger) to use the certificate is a Technology Preview for this release. 1.19.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.20. Release notes for Red Hat OpenShift distributed tracing platform 2.3 1.20.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.2 1.20.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.1 1.20.3. New features and enhancements With this release, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator is now installed to the openshift-distributed-tracing namespace by default. Before this update, the default installation had been in the openshift-operators namespace. 1.20.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.21. Release notes for Red Hat OpenShift distributed tracing platform 2.2 1.21.1. Technology Preview features The unsupported OpenTelemetry Collector components included in the 2.1 release are removed. 1.21.2. Bug fixes This release of the Red Hat OpenShift distributed tracing platform addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.22. Release notes for Red Hat OpenShift distributed tracing platform 2.1 1.22.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.29.1 1.22.2. Technology Preview features This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. With this update, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.22.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.23. Release notes for Red Hat OpenShift distributed tracing platform 2.0 1.23.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.28.0 1.23.2. New features and enhancements This release introduces the following new features and enhancements: Rebrands Red Hat OpenShift Jaeger as the Red Hat OpenShift distributed tracing platform. Updates Red Hat OpenShift distributed tracing platform (Jaeger) Operator to Jaeger 1.28. Going forward, the Red Hat OpenShift distributed tracing platform will only support the stable Operator channel. Channels for individual releases are no longer supported. Adds support for OpenTelemetry protocol (OTLP) to the Query service. Introduces a new distributed tracing icon that appears in the OperatorHub. Includes rolling updates to the documentation to support the name change and new features. 1.23.3. Technology Preview features This release adds the Red Hat build of OpenTelemetry as a Technology Preview , which you install using the Red Hat build of OpenTelemetry Operator. Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation. The Red Hat build of OpenTelemetry includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat OpenShift distributed tracing platform. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.23.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.24. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.25. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | [
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/distributed_tracing/distr-tracing-rn |
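The release notes above reference several TempoStack fields by path only. The following minimal sketch shows how those fields fit together in a single custom resource; the apiVersion, the instance name, and all example values are assumptions for illustration and are not taken from the release notes, so verify them against the distributed tracing platform (Tempo) configuration documentation before use.

apiVersion: tempo.grafana.com/v1alpha1    # assumed API group and version for the Tempo Operator
kind: TempoStack
metadata:
  name: sample                            # hypothetical instance name
spec:
  timeout: 3m                             # optional; one timeout value for all components, defaults to 30s
  template:
    queryFrontend:
      monitorTab:
        redMetricsNamespace: ""           # empty value restores the pre-3.4 empty metrics namespace
      jaegerQuery:
        enabled: true
        findTracesConcurrentRequests: 2   # example value for improving Jaeger UI query performance
        servicesQueryDuration: 72h        # example duration for which service names remain queryable

When querying large data sets through a route, the notes above also mention raising the route timeout with an annotation such as haproxy.router.openshift.io/timeout: 3m .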
Chapter 38. PodTemplate schema reference | Chapter 38. PodTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , JmxTransTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , KafkaNodePoolTemplate , ZookeeperClusterTemplate Full list of PodTemplate schema properties Configures the template for Kafka pods. Example PodTemplate configuration # ... template: pod: metadata: labels: label1: value1 annotations: anno1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 # ... 38.1. hostAliases Use the hostAliases property to specify a list of hosts and IP addresses, which are injected into the /etc/hosts file of the pod. This configuration is especially useful for Kafka Connect or MirrorMaker when a connection outside of the cluster is also requested by users. Example hostAliases configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect #... spec: # ... template: pod: hostAliases: - ip: "192.168.1.86" hostnames: - "my-host-1" - "my-host-2" #... 38.2. PodTemplate schema properties Property Description metadata Metadata applied to the resource. MetadataTemplate imagePullSecrets List of references to secrets in the same namespace to use for pulling any of the images used by this Pod. When the STRIMZI_IMAGE_PULL_SECRETS environment variable in Cluster Operator and the imagePullSecrets option are specified, only the imagePullSecrets variable is used and the STRIMZI_IMAGE_PULL_SECRETS variable is ignored. For more information, see the external documentation for core/v1 localobjectreference . LocalObjectReference array securityContext Configures pod-level security attributes and common container settings. For more information, see the external documentation for core/v1 podsecuritycontext . PodSecurityContext terminationGracePeriodSeconds The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time the processes are forcibly halted with a kill signal. Set this value to longer than the expected cleanup time for your process. Value must be a non-negative integer. A zero value indicates delete immediately. You might need to increase the grace period for very large Kafka clusters, so that the Kafka brokers have enough time to transfer their work to another broker before they are terminated. Defaults to 30 seconds. integer affinity The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array priorityClassName The name of the priority class used to assign priority to the pods. For more information about priority classes, see Pod Priority and Preemption . string schedulerName The name of the scheduler used to dispatch this Pod . If not specified, the default scheduler will be used. string hostAliases The pod's HostAliases. HostAliases is an optional list of hosts and IPs that will be injected into the Pod's hosts file if specified. For more information, see the external documentation for core/v1 hostalias . HostAlias array tmpDirSizeLimit Defines the total amount (for example 1Gi ) of local storage required for the temporary EmptyDir volume ( /tmp ). Default value is 5Mi . 
string enableServiceLinks Indicates whether information about services should be injected into Pod's environment variables. boolean topologySpreadConstraints The pod's topology spread constraints. For more information, see the external documentation for core/v1 topologyspreadconstraint . TopologySpreadConstraint array | [
"template: pod: metadata: labels: label1: value1 annotations: anno1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # spec: # template: pod: hostAliases: - ip: \"192.168.1.86\" hostnames: - \"my-host-1\" - \"my-host-2\" #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-PodTemplate-reference |
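Several of the optional properties in the table above are not covered by the two examples. The following fragment, in the same style as the examples, is a sketch only: the example values, the priority class name, and the label selector are assumptions and must be adapted to your cluster before use.

# ...
template:
  pod:
    priorityClassName: high-priority          # assumes a PriorityClass named high-priority already exists
    tmpDirSizeLimit: 100Mi                     # raises the default 5Mi limit for the /tmp EmptyDir volume
    enableServiceLinks: false
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            strimzi.io/cluster: my-cluster     # hypothetical label; match the labels on your pods
# ...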
7.220. udev | 7.220. udev 7.220.1. RHBA-2015:1382 - udev bug fix update Updated udev packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The udev packages implement a dynamic device-directory, providing only the devices present on the system. This dynamic directory runs in user space, dynamically creates and removes devices, provides consistent naming, and a user-space API. The udev packages replace the devfs package and provides better hot-plug functionality. Bug Fixes BZ# 1164960 An earlier update was made to increase the amount of udev workers when some workers were stuck during network module loading, but an incorrect semaphore counter was used. As a consequence, the amount of workers was not increased, and if all workers were busy, timeouts could occur and some events were not correctly processed. With this update, the correct semaphore counter is used, and the amount of available workers now increases as expected. BZ# 1130438 The udev tool did not run the ata_id helper for ATA/ATAPI devices (SPC-3 or later) using the SCSI subsystem. Consequently, those devices, mostly DVD and CD drives, had no ID_SERIAL entry in the udev database and therefore no symbolic link in the /dev/disk/by-id/ directory. With this update, udev calls the ata_id helper on those devices, and the symbolic link in /dev/disk/by-id/ is now present as expected. BZ# 907687 The information displayed for SAS drives in the /dev/disk/by-path/ directory was not a "path" reference, but an "id" reference. Consequently, the symbolic link for SAS drives in /dev/disk/by-path/ changed if the "id" of a component changed. The original scheme uses the disk's SAS address and LUN, and the new scheme introduced by this update uses the SAS address of the nearest expander (if available) and the PHY ID number of the connection. For compatibility reasons, the old symbolic link still exists and a new ID_SAS_PATH environment variable determines a new symbolic link. BZ# 1084513 The udev rules that load a kernel module for a device worked only if the device did not have a driver already, and some modules were not loaded despite being needed. Now, the udev rule no longer checks for the driver. BZ# 1140336 Previously, udev was extended to set the firmware timeout from 60 seconds to 10 minutes to prevent firmware loading timeouts. However, in the early boot phase, the file that is supposed to set this timeout is not present yet. Consequently, an error message was displayed, informing that the /sys/class/firmware timeout file does not exist. Now, udev no longer displays an error message in the described situation. BZ# 1018171 If udev processed the uevent queue for a device that was already removed, the internal handling failed to process an already removed device. Consequently, some symbolic links were not removed for these devices. Now, udev no longer relies on the existence of a device when dealing with the backlog of the uevent queue, and all symbolic links are removed as expected. BZ# 876535 If "udevlog" is specified on the kernel command line to debug udev, all udev logs are stored in the /dev/.udev/udev.log file. Running a system with the udev debug log turned on and using "udevlog" on the kernel command line for an extended period of time could cause /dev/.udev/udev.log to become very large and the devtmpfs mounted on /dev to become full. Consequently, if /dev became full, no new symbolic links and device nodes could be included. 
With this update, start_udev contains a verbose warning message describing the possibility. BZ# 794561 The ata_id helper of udev did not swap all bytes of the firmware revision information. As a consequence, the firmware revision information of ATA disks stored in the udev database had its last two digits swapped. The ata_id helper has been modified to also swap the last two characters of the firmware revision, and the firmware revision information of ATA disks is now correct. Users of udev are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-udev |
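A quick, hedged way to confirm the behavior described in the udev notes above — for example, that an ATA/ATAPI drive now gets an ID_SERIAL entry and a /dev/disk/by-id/ symbolic link — is to query the udev database directly. The device node /dev/sr0 is only an illustrative placeholder; substitute the device you want to inspect.

    udevadm info --query=property --name=/dev/sr0 | grep ID_SERIAL
    ls -l /dev/disk/by-id/ /dev/disk/by-path/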
5.9.2.4. MSDOS | 5.9.2.4. MSDOS Red Hat Enterprise Linux also supports file systems from other operating systems. As the name for the msdos file system implies, the original operating system supporting this file system was Microsoft's MS-DOS (R). As in MS-DOS, a Red Hat Enterprise Linux system accessing an msdos file system is limited to 8.3 file names. Likewise, other file attributes such as permissions and ownership cannot be changed. However, from a file interchange standpoint, the msdos file system is more than sufficient to get the job done. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-fs-msdos |
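As a small illustration of the constraints described for the msdos file system, the mount options below set ownership and permissions once at mount time, since they cannot be changed per file afterwards. The device /dev/sdb1, the mount point, and the uid/gid/umask values are placeholders, not recommendations.

    mkdir -p /mnt/dosdisk
    mount -t msdos -o uid=500,gid=500,umask=022 /dev/sdb1 /mnt/dosdisk
    ls -l /mnt/dosdisk    # note the 8.3 file names and the fixed ownership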
4.3.2. Creating User Passwords Within an Organization | 4.3.2. Creating User Passwords Within an Organization If there are a significant number of users within an organization, the system administrators have two basic options available to force the use of good passwords. They can create passwords for the user, or they can let users create their own passwords, while verifying the passwords are of acceptable quality. Creating the passwords for the users ensures that the passwords are good, but it becomes a daunting task as the organization grows. It also increases the risk of users writing their passwords down. For these reasons, most system administrators prefer to have the users create their own passwords, but actively verify that the passwords are good and, in some cases, force users to change their passwords periodically through password aging. 4.3.2.1. Forcing Strong Passwords To protect the network from intrusion it is a good idea for system administrators to verify that the passwords used within an organization are strong ones. When users are asked to create or change passwords, they can use the command line application passwd , which is Pluggable Authentication Manager ( PAM ) aware and therefore checks to see if the password is easy to crack or too short in length via the pam_cracklib.so PAM module. Since PAM is customizable, it is possible to add further password integrity checkers, such as pam_passwdqc (available from http://www.openwall.com/passwdqc/ ) or to write a new module. For a list of available PAM modules, refer to http://www.kernel.org/pub/linux/libs/pam/modules.html . For more information about PAM, refer to the chapter titled Pluggable Authentication Modules (PAM) in the Reference Guide . It should be noted, however, that the check performed on passwords at the time of their creation does not discover bad passwords as effectively as running a password cracking program against the passwords within the organization. There are many password cracking programs that run under Red Hat Enterprise Linux although none ship with the operating system. Below is a brief list of some of the more popular password cracking programs: Note None of these tools are supplied with Red Hat Enterprise Linux and are therefore not supported by Red Hat, Inc in any way. John The Ripper - A fast and flexible password cracking program. It allows the use of multiple word lists and is capable of brute-force password cracking. It is available online at http://www.openwall.com/john/ . Crack - Perhaps the most well known password cracking software, Crack is also very fast, though not as easy to use as John The Ripper . It can be found online at http://www.crypticide.com/users/alecm/ . Slurpie - Slurpie is similar to John The Ripper and Crack , but it is designed to run on multiple computers simultaneously, creating a distributed password cracking attack. It can be found along with a number of other distributed attack security evaluation tools online at http://www.ussrback.com/distributed.htm . Warning Always get authorization in writing before attempting to crack passwords within an organization. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-wstation-pass-org-create |
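As a sketch of how pam_cracklib.so is typically wired into the password stack on Red Hat Enterprise Linux, a line similar to the following may appear in /etc/pam.d/system-auth; the specific parameters shown here are illustrative rather than shipped defaults, and any change to PAM configuration should be tested carefully before rollout.

    password    requisite     pam_cracklib.so try_first_pass retry=3 minlen=8 dcredit=-1 ucredit=-1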
Install | Install Red Hat Advanced Cluster Management for Kubernetes 2.12 Installation | [
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name:schedule-acm namespace:open-cluster-management-backup spec: veleroSchedule:0 */1 * * * veleroTtl:120h",
"-n openshift-console get route",
"openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace>",
"create namespace <namespace>",
"project <namespace>",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <default> 1 namespace: <namespace> 2 spec: targetNamespaces: - <namespace>",
"apply -f <path-to-file>/<operator-group>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: acm-operator-subscription spec: sourceNamespace: openshift-marketplace source: redhat-operators channel: release-<2.x> installPlanApproval: Automatic name: advanced-cluster-management",
"apply -f <path-to-file>/<subscription>.yaml",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: {}",
"apply -f <path-to-file>/<custom-resource>.yaml",
"error: unable to recognize \"./mch.yaml\": no matches for kind \"MultiClusterHub\" in version \"operator.open-cluster-management.io/v1\"",
"get mch -o yaml",
"metadata: labels: node-role.kubernetes.io/infra: \"\" spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/infra",
"spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists",
"spec: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"-n openshift-console get route",
"openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.x -p advanced-cluster-management,multicluster-engine -t myregistry.example.com:5000/mirror/my-operator-index:v4.x",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: myregistry.example.com:5000/mirror/oc-mirror-metadata mirror: platform: channels: - name: stable-4.x type: ocp operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: advanced-cluster-management - name: multicluster-engine additionalImages: [] helm: {}",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-mirror-catalog-source namespace: openshift-marketplace spec: image: myregistry.example.com:5000/mirror/my-operator-index:v4.x sourceType: grpc",
"-n openshift-marketplace get packagemanifests",
"replace -f ./<path>/imageContentSourcePolicy.yaml",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: \"true\" name: operator-0 spec: repositoryDigestMirrors: - mirrors: - myregistry.example.com:5000/rhacm2 source: registry.redhat.io/rhacm2 - mirrors: - myregistry.example.com:5000/multicluster-engine source: registry.redhat.io/multicluster-engine - mirrors: - myregistry.example.com:5000/openshift4 source: registry.redhat.io/openshift4 - mirrors: - myregistry.example.com:5000/redhat source: registry.redhat.io/redhat",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: namespace: open-cluster-management name: hub annotations: installer.open-cluster-management.io/mce-subscription-spec: '{\"source\": \"my-mirror-catalog-source\"}' spec: {}",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-operators namespace: openshift-marketplace spec: displayName: Red Hat Operators priority: -100",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> 1 spec: overrides: components: - name: <name> 2 enabled: true",
"patch MultiClusterHub multiclusterhub -n <namespace> --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/overrides/components/-\",\"value\":{\"name\":\"<name>\",\"enabled\":true}}]'",
"create secret generic <secret> -n <namespace> --from-file=.dockerconfigjson=<path-to-pull-secret> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: imagePullSecret: <secret>",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: availabilityConfig: \"Basic\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: disableHubSelfManagement: true",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: disableUpdateClusterImageSets: true",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: customCAConfigmap: <configmap>",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: ingress: sslCiphers: - \"ECDHE-ECDSA-AES128-GCM-SHA256\" - \"ECDHE-RSA-AES128-GCM-SHA256\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: overrides: components: - name: cluster-backup enabled: true",
"patch MultiClusterHub multiclusterhub -n <namespace> --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/overrides/components/-\",\"value\":{\"name\":\"cluster-backup\",\"enabled\":true}}]'",
"get mch",
"metadata: annotations: installer.open-cluster-management.io/mce-subscription-spec: '{\"source\": \"<my-mirror-catalog-source>\"}'",
"delete discoveryconfigs --all --all-namespaces",
"delete agentserviceconfig --all",
"delete mco observability",
"project <namespace>",
"delete multiclusterhub --all",
"get mch -o yaml",
"get csv NAME DISPLAY VERSION REPLACES PHASE advanced-cluster-management.v2.x.0 Advanced Cluster Management for Kubernetes 2.x.0 Succeeded delete clusterserviceversion advanced-cluster-management.v2.x.0 get sub NAME PACKAGE SOURCE CHANNEL acm-operator-subscription advanced-cluster-management acm-custom-registry release-2.x delete sub acm-operator-subscription",
"ACM_NAMESPACE=<namespace> delete mch --all -n USDACM_NAMESPACE delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io delete clusterimageset --all delete clusterrole multiclusterengines.multicluster.openshift.io-v1-admin multiclusterengines.multicluster.openshift.io-v1-crdview multiclusterengines.multicluster.openshift.io-v1-edit multiclusterengines.multicluster.openshift.io-v1-view open-cluster-management:addons:application-manager open-cluster-management:admin-aggregate open-cluster-management:cert-policy-controller-hub open-cluster-management:cluster-manager-admin-aggregate open-cluster-management:config-policy-controller-hub open-cluster-management:edit-aggregate open-cluster-management:policy-framework-hub open-cluster-management:view-aggregate delete crd klusterletaddonconfigs.agent.open-cluster-management.io placementbindings.policy.open-cluster-management.io policies.policy.open-cluster-management.io userpreferences.console.open-cluster-management.io discoveredclusters.discovery.open-cluster-management.io discoveryconfigs.discovery.open-cluster-management.io delete mutatingwebhookconfiguration ocm-mutating-webhook managedclustermutators.admission.cluster.open-cluster-management.io multicluster-observability-operator delete validatingwebhookconfiguration channels.apps.open.cluster.management.webhook.validator application-webhook-validator multiclusterhub-operator-validating-webhook ocm-validating-webhook multicluster-observability-operator multiclusterengines.multicluster.openshift.io"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/install/index |
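Condensed into a single hedged sequence, the CLI-based installation captured in the commands above might look like the following; the namespace is an example, the YAML files are assumed to contain the OperatorGroup, Subscription, and MultiClusterHub resources shown earlier, and the final jsonpath check assumes the MultiClusterHub status exposes a phase field that eventually reports Running.

    oc create namespace open-cluster-management
    oc project open-cluster-management
    oc apply -f operator-group.yaml
    oc apply -f subscription.yaml
    oc apply -f multiclusterhub.yaml
    oc get mch -o=jsonpath='{.items[0].status.phase}'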
Chapter 22. ssh | Chapter 22. ssh 22.1. ssh:ssh 22.1.1. Description Connects to a remote SSH server 22.1.2. Syntax ssh:ssh [options] hostname [command] 22.1.3. Arguments Name Description hostname The host name to connect to via SSH command Optional command to execute 22.1.4. Options Name Description --help Display this help message -P, --password The password for remote login -p, --port The port to use for SSH connection -q Quiet Mode. Do not ask for confirmations -r, --retries retry connection establishment (up to attempts times) -l, --username The user name for remote login -k, --keyfile The private keyFile location when using key login, need have BouncyCastle registered as security provider using this flag 22.2. ssh:sshd 22.2.1. Description Creates a SSH server 22.2.2. Syntax ssh:sshd [options] 22.2.3. Options Name Description --help Display this help message -b, --background The service will run in the background -p, --port The port to setup the SSH server -i, --idle-timeout The session idle timeout in milliseconds -w, --welcome-banner The welcome banner to display when logging in -n, --nio-workers The number of NIO worker threads to use | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/ssh |
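For example, from a running Karaf console the commands above might be used as follows; the host, ports, and admin credentials are placeholders for your own environment. The first form opens an interactive session, the second runs a single remote command, and the third starts an additional SSH server in the background on port 8102.

    ssh:ssh -l admin -P admin -p 8101 localhost
    ssh:ssh -l admin -p 8101 localhost shell:info
    ssh:sshd -b -p 8102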
Chapter 2. Image [image.openshift.io/v1] | Chapter 2. Image [image.openshift.io/v1] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. 
They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 2.1.1. .dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 2.1.2. .dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 2.1.3. .dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 2.1.4. .dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 2.1.5. .signatures Description Signatures holds all signatures of the image. Type array 2.1.6. .signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g.
"registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 2.1.7. .signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 2.1.8. .signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 2.1.9. .signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 2.1.10. .signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 2.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/images DELETE : delete collection of Image GET : list or watch objects of kind Image POST : create an Image /apis/image.openshift.io/v1/watch/images GET : watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. /apis/image.openshift.io/v1/images/{name} DELETE : delete an Image GET : read the specified Image PATCH : partially update the specified Image PUT : replace the specified Image /apis/image.openshift.io/v1/watch/images/{name} GET : watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/image.openshift.io/v1/images Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Image Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Image Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK ImageList schema 401 - Unauthorized Empty HTTP method POST Description create an Image Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body Image schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 202 - Accepted Image schema 401 - Unauthorized Empty 2.2.2. /apis/image.openshift.io/v1/watch/images Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/image.openshift.io/v1/images/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the Image Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Image Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Image Table 2.17. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Image Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Image Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body Image schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty 2.2.4. /apis/image.openshift.io/v1/watch/images/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the Image Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. 
Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/image_apis/image-image-openshift-io-v1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.