Chapter 2. An active/passive Apache HTTP Server in a Red Hat High Availability Cluster
Chapter 2. An active/passive Apache HTTP Server in a Red Hat High Availability Cluster This chapter describes how to configure an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster using pcs to configure cluster resources. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption. Figure 2.1, "Apache in a Red Hat High Availability Two-Node Cluster" shows a high-level overview of the cluster. The cluster is a two-node Red Hat High Availability cluster which is configured with a network power switch and with shared storage. The cluster nodes are connected to a public network, for client access to the Apache HTTP server through a virtual IP. The Apache server runs on either Node 1 or Node 2, each of which has access to the storage on which the Apache data is kept. Figure 2.1. Apache in a Red Hat High Availability Two-Node Cluster This use case requires that your system include the following components: A two-node Red Hat High Availability cluster with power fencing configured for each node. This procedure uses the cluster example provided in Chapter 1, Creating a Red Hat High-Availability Cluster with Pacemaker . A public virtual IP address, required for Apache. Shared storage for the nodes in the cluster, using iSCSI, Fibre Channel, or other shared network block device. The cluster is configured with an Apache resource group, which contains the cluster components that the web server requires: an LVM resource, a file system resource, an IP address resource, and a web server resource. This resource group can fail over from one node of the cluster to the other, allowing either node to run the web server. Before creating the resource group for this cluster, you will perform the following procedures: Configure an ext4 file system mounted on the logical volume my_lv , as described in Section 2.1, "Configuring an LVM Volume with an ext4 File System" . Configure a web server, as described in Section 2.2, "Web Server Configuration" . Ensure that only the cluster is capable of activating the volume group that contains my_lv , and that the volume group will not be activated outside of the cluster on startup, as described in Section 2.3, "Exclusive Activation of a Volume Group in a Cluster" . After performing these procedures, you create the resource group and the resources it contains, as described in Section 2.4, "Creating the Resources and Resource Groups with the pcs Command" . 2.1. Configuring an LVM Volume with an ext4 File System This use case requires that you create an LVM logical volume on storage that is shared between the nodes of the cluster. The following procedure creates an LVM logical volume and then creates an ext4 file system on that volume. In this example, the shared partition /dev/sdb1 is used to store the LVM physical volume from which the LVM logical volume will be created. Note LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only. Since the /dev/sdb1 partition is storage that is shared, you perform this procedure on one node only. Create an LVM physical volume on partition /dev/sdb1 . Create the volume group my_vg that consists of the physical volume /dev/sdb1 .
Create a logical volume using the volume group my_vg . You can use the lvs command to display the logical volume. Create an ext4 file system on the logical volume my_lv .
[ "pvcreate /dev/sdb1 Physical volume \"/dev/sdb1\" successfully created", "vgcreate my_vg /dev/sdb1 Volume group \"my_vg\" successfully created", "lvcreate -L450 -n my_lv my_vg Rounding up size to full physical extent 452.00 MiB Logical volume \"my_lv\" created", "lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert my_lv my_vg -wi-a---- 452.00m", "mkfs.ext4 /dev/my_vg/my_lv mke2fs 1.42.7 (21-Jan-2013) Filesystem label= OS type: Linux" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-service-HAAA
17.9. Troubleshooting issues in the Red Hat Gluster Storage Trusted Storage Pool
17.9. Troubleshooting issues in the Red Hat Gluster Storage Trusted Storage Pool 17.9.1. Troubleshooting a network issue in the Red Hat Gluster Storage Trusted Storage Pool When enabling the network components to communicate with Jumbo frames in a Red Hat Gluster Storage Trusted Storage Pool, ensure that all the network components, such as switches and Red Hat Gluster Storage nodes, are configured properly. Verify the network configuration by running the ping command from one Red Hat Gluster Storage node to another. If the nodes in the Red Hat Gluster Storage Trusted Storage Pool or any other network components are not configured to fully support Jumbo frames, the ping command times out and displays the following error:
[ "ping -s 1600 '-Mdo'IP_ADDRESS local error: Message too long, mtu=1500" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-troubleshooting_issues_in_the_red_hat_storage_trusted_storage_pool
Chapter 4. Installing a cluster on Azure Stack Hub with network customizations
Chapter 4. Installing a cluster on Azure Stack Hub with network customizations In OpenShift Container Platform version 4.16, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Azure Stack Hub. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required.
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Configure an Azure account. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. $ export COMPRESSED_VHD_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. $ curl -O -L ${COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob using the az cli or the web portal. 4.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.6. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: $ mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
Additional resources Installation configuration parameters for Azure Stack Hub 4.6.1. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSImage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{"auths": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 10 12 14 17 18 20 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 13 The name of the resource group that contains the DNS zone for your base domain. 15 The name of your Azure Stack Hub local region. 16 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 19 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 21 The pull secret required to authenticate your cluster. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 4.7. Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure If you have not previously created installation manifest files, do so by running the following command: $ openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: $ oc adm release extract \ --from=$RELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ...
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating a cluster using the web console Updating a cluster using the CLI 4.8. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 4.9. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 4.10. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it.
Procedure Change to the directory that contains the installation program and create the manifests: $ ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 4.11. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.11.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.1. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration.
If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.2. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 4.3. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 4.4. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 .
internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 4.5. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 4.6. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 4.7. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. 
For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 4.8. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 4.9. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 4.10. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 4.11. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 4.12. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: $ ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: $ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 4.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.14. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive.
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH Verification Verify your installation by using an oc command: $ oc <command> 4.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 4.16. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: $ cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: $ oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console . 4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 4.18. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
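The Secret objects in Section 4.7 above expect every Azure value to be base64-encoded. As a minimal sketch of producing those values (the environment variable names are illustrative, not mandated by the procedure):

$ echo -n "$AZURE_SUBSCRIPTION_ID" | base64 -w0   # -w0 disables line wrapping in the output
$ echo -n "$AZURE_CLIENT_SECRET" | base64 -w0

The -n flag matters: a trailing newline would become part of the encoded credential and cause authentication failures that are hard to diagnose.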
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')", "curl -O -L USD{COMPRESSED_VHD_URL}", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{\"auths\": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "rm -f 
openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure_stack_hub/installing-azure-stack-hub-network-customizations
Chapter 34. Storage
Chapter 34. Storage /dev/disk/by-path/ now accounts for NPIV paths Previously, if two or more virtual host bus adapters (HBAs) were created on a single physical HBA, only a single link to the device was created in the /dev/disk/by-path/ directory instead of one link for each path. As a consequence, creating a virsh pool with virtual HBAs by using Fibre Channel N_Port ID Virtualization (NPIV) did not work correctly. With this update, symbolic links in /dev/disk/by-path/ are created correctly and are unique. Symbolic links in /dev/disk/by-path/ created by udev for logical unit numbers (LUNs) connected through a physical Fibre Channel N_Port stay the same. (BZ#1266934) When using thin-provisioning, buffered writes are no longer lost when the thin pool reaches capacity Previously, a resize operation, even an automated one, attempted to flush outstanding I/O to the storage device prior to the resize being performed. Since there was no room in the thin pool, the I/O operations had to be errored first to allow the grow to succeed. As a consequence, if a thin pool was filled to capacity, some writes could be lost even if the pool was being grown at that time. With this update, buffered writes are no longer lost to the thin pool in the described situation. (BZ# 1274676 ) RAID migration now works correctly on the little-endian variant of IBM Power Systems Previously, the raid-migrate command failed on the little-endian variant of IBM Power Systems if the stripe size was not specified, as the iprconfig utility fell back on the current stripe size of the RAID and loaded it from the adapter without performing a proper endianness conversion. The underlying source code has been modified to fix this bug, and RAID migration now works correctly on the little-endian variant of IBM Power Systems. (BZ#1297921) The multipathd daemon no longer reinstates unusable Implicit ALUA ghost paths Previously, the multipathd daemon automatically reinstated Implicit ALUA devices in the GHOST state, which were not usable. Multipath would continuously retry unusable devices, if they were the only ones present, instead of failing I/O operations. With this fix, multipathd no longer reinstates unusable Implicit ALUA ghost paths. As a result, multipath no longer continually retries I/O operations when only unusable Implicit ALUA paths are available. (BZ# 1291406 ) Multipath now includes 0 sized standby paths in the multipath device Some arrays do not report their size on the standby ports, resulting in 0 sized devices. Previously, Multipath did not allow 0 sized devices to be added to a multipath device. As a result, Multipath did not add 0 sized standby paths to the multipath device. With this update, Multipath now allows the addition of 0 sized paths to a device. (BZ#1356651) Multipath no longer modifies devices with a dm table type of multipath that were created by other programs Previously, the multipath tools assumed that they were in charge of managing all dm devices with a multipath table. The multipathd daemon would modify the tables of devices that were not created by the multipath tools. With this update, the multipath tools now operate only on devices whose dm UUIDs start with mpath- , which is the UUID prefix that multipath uses on all the devices it creates. As a result, multipath will no longer modify devices with a dm table type of multipath that were created by other programs.
(BZ#1241528) The multipathd daemon now allows paths to be added to a new multipath device if it currently has no usable paths Previously, when multipathd created a new multipath device it did not allow any more paths to be added until it saw the udev change event for the multipath device being created, even if it created the device with no usable paths. If a multipath device was created with no usable paths, the udev device manager would hang trying to get information on the device, and until it timed out no active paths could be added to the device. With this fix, multipathd now allows paths to be added to a newly created multipath device if it currently has no usable paths. As a result, usable paths are immediately added to new devices that have none, and udev does not hang. (BZ#1350931, BZ#1351430) The multipathd daemon no longer quits on encountering recoverable errors during startup Previously, multipathd would quit instead of recovering when it hit recoverable errors during startup. With this fix, multipathd now continues if it hits a recoverable error during startup and no longer quits. (BZ#1368501) The multipathd daemon now responds to failed removes with fail rather than ok Previously, the multipathd daemon did not retain the error status when removing a path or a map failed and would respond to failed removes with ok . With this fix, multipathd now responds to failed removes with fail . (BZ#1272620) Multipath no longer crashes when a uid_attribute is changed after a device is added and the device is then removed Previously, if a path changed its WWID after being added to a multipath device, the multipathd daemon would create a new device. This led to the path being in both devices. As a consequence, if users changed the uid_attribute after multipath devices were created and then removed the devices, multipathd would try to access freed memory and crash. With this fix, multipathd no longer allows the path's WWID to be changed while it is being used in a multipath device. As a result, multipathd no longer crashes in this scenario. (BZ#1323429) Multipath no longer occasionally fails while renaming devices Previously, multipath was using an uninitialized variable in the function to rename a device. This would cause multipath to fail occasionally while renaming a device because the variable was set to an invalid value. With this fix, multipath now initializes this variable when renaming a device. (BZ#1363830) Systemd no longer reports that the multipathd.pid file is not readable Previously, systemd reported that it was unable to read the multipathd.pid file after the multipathd command returned. This was because the multipathd command was returning as soon as it forked the daemon, and the daemon was not writing the pid file until after configuration was complete. With this fix, the multipathd command now either waits until the multipathd daemon has written the pid file or 3 seconds have passed before returning, and the daemon writes the pid file earlier in startup. As a result, systemd no longer reports that the multipathd.pid file is not readable. (BZ#1253913) Multipath now states that a path is not a valid argument for paths that do not belong to block devices Previously, if you used a path to something that is not a valid block device, multipath would tell you that it requires a path to check, which is unhelpful. This is because multipath considered anything that is not a block device path or major:minor number to be a multipath alias. 
With this fix, multipath will not treat fully qualified paths to anything that is not a block device as a multipath alias. As a result, multipath will state that the path is not a valid argument for paths that do not belong to block devices. (BZ#1319853) All /dev/mapper entries for multipath devices are now symbolic links created by udev Previously, some /dev/mapper entries for multipath devices were symbolic links (symlinks) and some were block devices since multipath was not correctly waiting for udev to create the /dev/mapper/ symlinks. With this fix, multipath now waits for udev after each transaction. As a result, all /dev/mapper entries for multipath devices are now symlinks created by udev . (BZ#1255885) New devices are now claimed by multipath as soon as multipath creates a multipath device on top of them Previously, the first time multipath saw a device, it was not claimed by multipath in the udev rules since multipath will not claim a device in udev unless the WWID is in the /etc/multipath/wwids file when processing the uevent . With this fix, when multipath adds a new device WWID to the wwids file, it will issue a change event on the device so it can claim it in the udev rules. New devices are now claimed by multipath as soon as multipath creates a multipath device on top of them. (BZ#1299600) Failures on some devices no longer keep multipath from creating other devices Previously, the multipath command could fail to set up working devices because of failures on unrelated devices since it would quit early if it failed to get the information on any of the devices it was trying to create. With this fix, multipath no longer quits early if it fails to get information on some of the devices and failures on some devices no longer keep multipath from creating others. (BZ#1313324) Multipath no longer misses uevent messages and it now adds all appropriate devices Previously, multipath did not always add all the path devices correctly because it was not correctly checking for the existence of a libudev function to compile with support for resizing the uevent socket. Because of this, multipath was not resizing the uevent socket, and it could overflow. This caused multipath to miss necessary events. With this fix, multipath now checks for the proper libudev function and compiles with support for resizing the uevent socket. As a result, multipath no longer misses uevent messages, and it now adds all appropriate devices. (BZ#1296979) The kpartx tool no longer returns before devices are created Previously, by default the kpartx tool returned without waiting for devices to be created. This was a source of confusion for users who would expect the devices to exist immediately after kpartx returned. With this update, kpartx by default now waits until the devices are created before returning. (BZ#1299648) Multiple calls to resize a device will each attempt to resize the device, and will correctly report the result Previously, if multipathd failed to resize a device, it continued to think that the device had the new size. Subsequent calls to resize the device would report success and not resize the device because multipathd thought that it had nothing left to do. With this fix, multipathd now resets the device size to the original size if the resize fails. As a result, multiple calls to resize a device will each attempt to resize the device, and will correctly report the result. 
(BZ#1333492) Multipath now correctly creates partition devices for 4k block devices with DOS partitions greater than 2TB Previously, the kpartx tool created the wrong size partitions for 4K block size devices with DOS partitions greater than 2TB. This was because kpartx stored the number of sectors and the multiplier needed to convert from the native sector size to 512B sectors in 32 bit unsigned integers. This caused a rollover if the two numbers multiplied together were larger than 2^32. With this fix, multipath now uses a 64 bit unsigned integer for the sector size multiplier variable, so the result will not roll over when the numbers are multiplied together. As a result, multipath now correctly creates the partitions. (BZ#1311463) Multipath no longer removes partitions that are in use and restores partitions when a path is added back Previously, if all paths to a device were lost, multipath would remove all the partitions that were not in use and never restore them. This occurred because when multipath tried to remove a device it was removing partitions even if some of them were in use, and once they were removed it was never restoring them. With this fix, multipath now checks if any partition is in use before attempting a remove, and if the remove fails, it restores the partitions when a path is added back. (BZ#1292599) The kpartx tool no longer overwrites an existing partition device when a new device's name matches the existing one Previously, when a potential new device's name matched an existing device's name, kpartx was silently overwriting the existing partition device with the new one. This would cause kpartx devices to suddenly change where they were pointing if there was a naming clash. With this fix, kpartx now checks the UUID to make sure that it is not overwriting a partition device that belongs to a different whole device. If there is a name clash, kpartx will now fail with an error message instead of changing where an existing partition device is pointing to. (BZ#1283750) The mpathconf --allow command now creates a configuration file with the correct devices allowed for a node to boot Previously, with certain setups the mpathconf --allow command created a configuration file that did not allow the node to boot. This occurred because mpathconf --allow was removing existing entries from the blacklist_exceptions section of the configuration file, which could cause some of the allowed devices to be blacklisted. It also printed duplicate WWID entries in the blacklist_exceptions section. With this fix, mpathconf --allow no longer removes the existing blacklist_exceptions entries, and prints the WWID entries only once. This command now always creates a configuration file with the correct devices allowed for a node to boot. (BZ#1288660) Multipath devices now get correctly identified as LVM physical volumes Previously, LVM sometimes failed to recognize multipath PVs. This was because multipathd could be reloading a device at the same time that the creation uevent for it arrived. The LVM udev rules do not allow processing a device that is currently suspended, which happens during a reload. With this fix, multipathd delays device reloads until after it has received the creation uevent . (BZ#1304687) The multipathd daemon no longer prints that a path is up when it is actually down Previously, the multipathd daemon could print that a path was up when it actually was down. 
If multipathd detected that the path was down before it called the path checker, it never cleared the last path checker message and would print that out. With this fix, multipathd now clears the path checker message if the path is determined to be down before the checker is run. (BZ#1280524) Multipath devices no longer fail to be created if udev is processing a partition device at the same time Previously, multipathd was unable to create a multipath device when udev had a lock on the path device. This was because multipathd grabbed an exclusive lock on the path devices while creating the multipath device and udev grabs a shared lock on the path devices while processing its partition devices. With this fix, multipathd now grabs a shared lock as well, so that it can run at the same time as udev . (BZ#1347769) systemd no longer prints warning messages about a missing dependency Previously, systemd printed warning messages about a missing dependency when the multipathd systemd service unit file required another unit file that was not available in the initramfs . With this fix, the multipathd unit file now uses Wants instead of Requires since it is able to operate without the blk-availability unit file. (BZ#1269293) The kpartx generated devices now have the same partition number as the actual partition number Previously, the kpartx generated device partition number did not match the actual partition number. This was because kpartx was not counting sun partitions with no sectors when determining the partition number. With this fix, kpartx now counts sun partitions with no sectors when determining the partition number and the kpartx generated devices now have the same partition number as the actual partition number. (BZ#1241774) MTX no longer fails with large tape storage arrays On systems configured with a large tape storage drive array, the MTX tool previously failed with an error. As a consequence, it was not possible to manage the tape storage. This update improves support for larger tape storage arrays, and MTX can now manage large tape storage as expected. (BZ#1298647) Interferences between dmraid and other device-mapper subsystems no longer occur Previously, the dmraid packages were compiled with an incorrect testing option. As a consequence, the dmraid tool inadvertently scanned all devices, including any other device-mapper subsystems like LVM, which could interfere with those other subsystems and cause various failures while booting. With this update, testing mode is disabled in dmraid , and all devices are not scanned at boot. As a result, interferences between dmraid and other device-mapper subsystems no longer occur. (BZ#1348289) systemd no longer warns about a missing unit for dmraid-activation.service after uninstalling dmraid Prior to this update, the /etc/systemd/system/sysinit.target.wants/dmraid-activation.service symbolic link was left on the system after uninstalling the dmraid packages, which caused the systemd service to warn about a missing unit for the dmraid-activation.service . With this update, the aforementioned symbolic link is removed when uninstalling dmraid . (BZ#1315644) mdadm no longer fails to stop an IMSM RAID array during a reshape Due to a bug, attempts to stop an Intel Matrix Storage Manager (IMSM) RAID array during a reshape previously failed. The underlying source code has been modified to fix this bug, and the mdadm utility now stops the array correctly in the described situation. 
(BZ#1312837) Using mdadm to assign a hot spare to a degraded array while running I/O operations no longer fails Previously, assigning a hot spare to a degraded array while running I/O operations on the MD Array could fail, and the mdadm utility returned error messages such as: A patch has been applied to fix this bug, and adding a hot spare to a degraded array now completes as expected in the described situation. (BZ#1300579) A degraded RAID1 array created with mdadm is no longer shown as inactive after rebooting Previously, a degraded RAID1 array that was created using the mdadm utility could be shown as an inactive RAID0 array after rebooting the system. With this update, the array is started correctly after the system is rebooted. (BZ#1290494) Trying to reshape a RAID1 array containing a bitmap to a RAID0 array no longer corrupts the RAID1 array Reshaping a RAID1 array containing a bitmap to a RAID0 array with the mdadm utility is not supported. Previously, when attempting to reshape a RAID1 array containing a bitmap to a RAID0 array, the operation was denied, but the RAID1 array was corrupted. With this update, the reshape is denied, but the RAID1 array stays functional as expected. (BZ#1174622) A race condition no longer occurs with IMSM RAID arrays running an mdadm reshape operation With Intel Matrix Storage Manager (IMSM) RAID arrays running an mdadm reshape operation, a race condition could previously allow a second reshape to be launched on the same array before the first operation was completed, and the reshaping operation did not complete correctly. With this update, the race condition no longer occurs, and a second reshape operation cannot be started before the first operation is completed. (BZ#1347762) mdadm can now assemble arrays that use device names over 15 characters long Previously, the mdadm utility could terminate unexpectedly with a segmentation fault when trying to assemble an array that included a device with a device name longer than 15 characters. With this update, mdadm assembles arrays correctly even when the arrays use device names longer than 15 characters. (BZ#1347749)
[ "mdadm: /dev/md1 has failed so using --add cannot work and might destroy mdadm: data on /dev/sdd1. You should stop the array and re-assemble it" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/bug_fixes_storage
14.11.2. Creating, Defining, and Starting Storage Pools
14.11.2. Creating, Defining, and Starting Storage Pools This section provides information about creating, defining, and starting storage pools. 14.11.2.1. Building a storage pool The pool-build pool-or-uuid --overwrite --no-overwrite command builds a pool with a specified pool name or UUID . The options --overwrite and --no-overwrite can only be used for a pool whose type is file system. If neither option is specified, and the pool is a file system type pool, then the resulting build will only make the directory. If --no-overwrite is specified, it probes to determine if a file system already exists on the target device, returning an error if it exists, or using mkfs to format the target device if it does not. If --overwrite is specified, then the mkfs command is executed and any existing data on the target device is overwritten.
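For example, a minimal sketch of building a file-system pool might look like the following; the pool name guest_images_fs is a placeholder for illustration:

virsh pool-build guest_images_fs --overwrite

Because --overwrite is specified here, mkfs formats the target device and any existing data on it is overwritten, so use this option only when the device holds no data that you need.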
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-storage_pool_commands-creating_defining_and_starting_storage_pools
Chapter 2. Installing Cryostat
Chapter 2. Installing Cryostat You can install the Red Hat build of Cryostat Operator in a project on Red Hat OpenShift by using Operator Lifecycle Manager (OLM). With the Red Hat build of Cryostat Operator installed, you can create instances of Cryostat that you can access by using a web console from the Red Hat OpenShift web console. You can also download the latest Cryostat component images from the Red Hat Ecosystem Catalog. 2.1. Installing Cryostat on Red Hat OpenShift by using a Red Hat build of Cryostat Operator You can use the Operator Lifecycle Manager (OLM) to install the Red Hat build of Cryostat Operator in a project on your Red Hat OpenShift cluster. You can use the Red Hat build of Cryostat Operator to create single namespace or multi-namespace Cryostat instances. You can control these instances by using a GUI that is accessible from the Red Hat OpenShift web console. Important If you need to upgrade your Red Hat build of Cryostat Operator subscription from Cryostat 2.0 to Cryostat 2.4, you must change the update channel from stable-2.0 to stable . Prerequisites Created an OpenShift Container Platform 4.11 or later cluster. Created a Red Hat OpenShift user account with permissions to install Red Hat build of Cryostat Operator in a project. Installed Operator Lifecycle Manager (OLM) on your cluster. Installed cert-manager with the cert-manager Operator for Red Hat OpenShift. If you are using OpenShift Container Platform 4.11 or later, you can install the cert-manager Operator for Red Hat OpenShift. For more information, see cert-manager Operator for Red Hat OpenShift (OpenShift Container Platform) . Logged in to Red Hat OpenShift by using the Red Hat OpenShift web console. Procedure In your browser, navigate to Home > Projects by using the web console. Select the name of the project in which you want to install the Red Hat build of Cryostat Operator. Install the Red Hat build of Cryostat Operator: In the navigation menu of your web console, navigate to Operators > OperatorHub . Select the Red Hat build of Cryostat Operator from the list. You can use the search box in the upper part of the screen to find the Red Hat build of Cryostat Operator. To install the Red Hat build of Cryostat Operator in your project, click Install . The Red Hat OpenShift web console prompts you to create a Cryostat custom resource (CR). Note If you are installing a Cryostat instance that is enabled for multiple namespaces, in the Installation mode area, click the All namespaces on the cluster (default) radio button. You can create the CR either manually or automatically. If you want to create the CR manually, see step 4. If you want to create the CR automatically, see step 5. If you want to create the CR manually, complete the following steps: Navigate to Operators > Installed Operators by using the web console and select Red Hat build of Cryostat Operator from the list of installed operators: Figure 2.1. Viewing the Red Hat build of Cryostat operator in the list of installed operators Click the Details tab. To create a single-namespace Cryostat instance, go to the Provided APIs section. Then, under Cryostat , click Create instance . Note If you want to create a Cryostat instance that is enabled for multiple namespaces, in the Provided APIs section, select Cluster Cryostat and click Create instance . The Cluster Cryostat API has configuration options that control the deployment of the Cryostat application and its related components. For more information, see Creating Cryostat on multiple namespaces . 
Figure 2.2. Selecting the Cryostat API that is provided by the Red Hat build of Cryostat Operator Click either the Form view radio button or the YAML view radio button. If you want to enter your information in the YAML configuration file, click YAML view . Specify a name for the instance of Cryostat that you want to create. Optional : In the Labels field, specify a label or annotation for the Operand workload you want to deploy. You can also specify additional configuration options for your deployment: Figure 2.3. Creating an instance of Cryostat by using a form in the web console Alternatively, you can use a YAML template to create your instance and specify additional configuration options instead of using the form: Figure 2.4. Creating an instance of Cryostat by using a YAML template in the web console If you want to create the CR by using the automatic prompt option, follow the prompt's instructions and then complete the following steps: Click either the Form view radio button or the YAML view radio button. If you want to enter your information in the YAML configuration file, click YAML view . Specify a name for the instance of Cryostat that you want to create. Optional : In the Labels field, specify a label or annotation for the Operand workload you want to deploy. You can also specify additional configuration options for your deployment: Figure 2.5. Creating an instance of Cryostat by using a form in the web console Alternatively, you can use a YAML template to create your instance and specify additional configuration options instead of using the form: Figure 2.6. Creating an instance of Cryostat by using a YAML template in the web console To start the creation process for your Cryostat instance, click Create . You must wait for all resources of your Cryostat instance to be ready before you can access it. Verification In the navigation menu of the web console, click Operators , then click Installed Operators . From the table of installed operators, select Red Hat build of Cryostat Operator . Select the Cryostat tab. Your Cryostat instance appears in the table of instances and lists the following conditions: TLSSetupComplete is set to true . MainDeploymentAvailable is set to true . Optional: If you enabled the reports generator service, then ReportsDeploymentAvailable is shown and set to true . Figure 2.7. Example of conditions set to True under the Status column for a Cryostat instance on OpenShift Optional: Select your Cryostat instance from the Cryostat table. Go to the Cryostat Conditions table, where you can see more information for each condition. Figure 2.8. Example of a Cryostat Conditions table that lists each condition and its criteria Steps Accessing Cryostat by using the web console 2.1.1. Accessing Cryostat by using the web console You can access and control Cryostat by using a web console that is accessible from the Red Hat OpenShift web console. Cryostat integrates with the OAuth server that is built into Red Hat OpenShift. When you attempt to access Cryostat on Red Hat OpenShift, the OAuth server directs you to the Red Hat OpenShift login page, where you can enter your Red Hat OpenShift credentials. After you enter your credentials, the OAuth server directs you to the Cryostat web console. Note If you want to access all of Cryostat's features on the OpenShift Container Platform, you must request Cryostat-specific Role-Based Access Controls (RBAC) permissions for your Red Hat OpenShift user account. See RBAC permissions . 
Prerequisites Created a Cryostat instance in your project. Logged in using the Red Hat OpenShift web console. Procedure On the Red Hat OpenShift web console, navigate to Installed Operators and select Red Hat build of Cryostat Operator from the list. Select the Cryostat instance that you want to access: For single-namespace Cryostat instances, click the Cryostat tab and select the Cryostat instance from the table. For multi-namespace Cryostat instances, click the Cluster Cryostat tab and select the Cluster Cryostat instance from the table. Figure 2.9. Example of selecting a single-namespace Cryostat instance under the Cryostat tab Select the application URL to access the Cryostat login screen: For single-namespace Cryostat instances, click the link in the Application URL section to access the Cryostat login screen. The OAuth server redirects you to an OpenShift Container Platform login page, so that you can obtain OAuth access tokens for authenticating to the Cryostat API. Figure 2.10. Example of selecting a link under the Application URL section For multi-namespace Cryostat instances, access the application URL in one of the following ways: In your Red Hat OpenShift command-line console (CLI), enter the following command, and replace "clustercryostatinstance-name" with the name of your multi-namespace Cryostat instance: The application URL is returned, which you can open directly from the command line or copy to a browser. Click the YAML tab and go to the status: section. Copy the link that is available under applicationURL to your browser. Enter your credential details and then click Login . When you log in through the OAuth server for the first time, an Authorize Access page opens on your web browser. Figure 2.11. Example of an Authorize Access page that opens in a web browser Review the Requested permissions options and then select the required checkboxes. For optimal Cryostat performance, select both checkboxes. Choose one of the following options: If you want to accept the requested permissions that you selected, click the Allow selected permissions button. If you want to reject all requested permission options, click the Deny button. Your web browser redirects you to the Cryostat web console, where you can monitor Java applications that are running in a Java Virtual Machine (JVM). 2.1.2. RBAC permissions You might need to request Cryostat-specific Role-Based Access Controls (RBAC) permissions for your Red Hat OpenShift user account, so that you can access all of Cryostat's features on the OpenShift Container Platform. Note Setting RBAC permissions relates to your Red Hat OpenShift user account. Cryostat reads your Red Hat OpenShift account to determine what functionality a user can access on Cryostat. If you want to set specific permissions for a Cryostat user account, see Accessing Cryostat by using the web console . If you have limited user permissions, you can only access Cryostat features that Red Hat OpenShift authorizes you to use. If you have read-only permissions, then you can only view JDK flight recordings that were created by other users. You cannot create a new recording or delete an existing recording. You can create a custom role with Cryostat-specific RBAC permissions and then bind this role to a user's Red Hat OpenShift account. This use case is useful when you want to set specific permissions for each user that operates within the same Cryostat namespace. Consider another use case where you want to provide users read-only access to your JFR recordings. 
You would create a custom role and specify get for the verbs: string of your role's pods/exec resource. Red Hat OpenShift grants permissions to users based on values specified in the apiGroups configuration string in the YAML configuration. Cryostat maps Red Hat OpenShift endpoints to target applications, so that a user that belongs to the role can perform certain tasks on target applications, such as starting a recording on a target application. The following YAML configuration demonstrates the ClusterRole with all the Cryostat-specific RBAC permissions defined: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: creationTimestamp: null name: oauth-client rules: - apiGroups: - operator.cryostat.io resources: - cryostats verbs: - create - patch - delete - get - apiGroups: - "" resources: - pods - pods/exec - services verbs: - create - patch - delete - get - apiGroups: - "" resources: - replicationcontrollers - endpoints verbs: - get - apiGroups: - apps resources: - deployments verbs: - create - get - apiGroups: - apps resources: - daemonsets - replicasets - statefulsets verbs: - get Additional resources Using RBAC to define and apply permissions (Red Hat OpenShift documentation) 2.2. Helm charts Instead of using the Red Hat build of Cryostat Operator on Red Hat OpenShift to install Cryostat, you can use a Helm chart. The Red Hat build of Cryostat Operator is the preferred way to install Cryostat, but if you require a flexible installation method that requires fewer cluster permissions, you can install Cryostat with a Helm chart. Helm is a package manager on Red Hat OpenShift that provides the following benefits: Applies regular application updates by using custom hooks. Manages the installation of complex applications. Provides charts that you can host on public or private servers. If sharing charts on a public server, ensure that you are aware of the security risks. Supports rolling back to application versions. By default, Red Hat OpenShift 4.11 includes the Helm chart package manager. Before you install Cryostat with a Cryostat Helm chart, consider the following supported functions for the Cryostat Helm chart and the Red Hat build of Cryostat Operator: Function Cryostat Helm chart Red Hat build of Cryostat Operator Access Cryostat by using Services [✓] [✓] Access Cryostat by using Routes [✓] [✓] Basic authentication [✓] OpenShift OAuth authentication [✓] End-to-end encryption [✓] Grafana integration [✓] [✓] Persistent storage [✓] [✓] Sidecar report generator [✓] The table shows that the Cryostat Helm chart does not support the same level of functionality as the Red Hat build of Cryostat Operator. Additional resources Overview Red Hat build of Cryostat Operator (Using the Red Hat build of Cryostat Operator to configure Cryostat) 2.2.1. Installing Cryostat by using a Helm chart By default, Red Hat OpenShift 4.11 includes the Helm chart package manager. You can use this package manager to install a Cryostat Helm chart on Red Hat OpenShift. In turn, you can use this Helm chart to install a Cryostat instance on Red Hat OpenShift. After you install the Cryostat Helm chart, the Helm chart creates the following objects: Deployment , which contains Cryostat, Grafana, and a data source for Grafana. Routes that expose the Cryostat and Grafana services outside a Red Hat OpenShift cluster. This object is enabled by default on Red Hat OpenShift. Services for Cryostat and Grafana. 
Service Account , Role , and Role Binding for Cryostat, so that the Cryostat Helm chart can use these objects to discover your applications. Prerequisites Logged in to the OpenShift Container Platform by using the Red Hat OpenShift web console. Configured appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure Switch to Developer mode on your Red Hat OpenShift web console. Click the +Add menu. From the Developer Catalog panel, click Helm Chart . Click the Cryostat tile. A window displays on your Red Hat OpenShift web console. Tip To quickly find the Cryostat tile, enter Cryostat in the search field. Click Install Helm Chart . From the Install Helm Chart window, complete the following actions: In the Release name field, enter a name for your Cryostat Helm chart. From the Chart version drop-down list, ensure a version of Cryostat is selected. Optional: From Form view , click Chart Values , and then configure options for your Cryostat Helm chart. Optional: To access more configuration options, switch to the YAML View and then edit the parameters to meet your needs. Figure 2.12. OpenShift Install Helm Chart window Click Install . A window with tabs might open in your web console where you can view information for the Cryostat Helm chart. From the Release notes tab, you can view post-installation steps that you must perform. To perform these steps, you must use the oc CLI for your Red Hat OpenShift cluster. By default, the Cryostat Helm chart uses Routes for networking. If you have disabled Routes , the instructions might differ depending on the kind of networking that you selected. Important If you set core.route.enabled or grafana.route.enabled to false for your Cryostat Helm chart, which disables the Routes resource, port-forwarding oc instructions display in the web console. Optional: From the topology window, click a pod icon and then go to either the Details tab or the Resources tab to view more information about the pod. Tip If you need to quickly find a pod, consider using the filter toolbar, where you can display options, filter by resource, or enter a name of a pod. When you have completed the post-installation steps that are outlined on the Release notes tab, you can use Cryostat with your applications. Figure 2.13. OpenShift pod topology window Verification In the same terminal where you completed the post-installation steps, go to the "Visit the Cryostat application at ... " step to view the URL with which you can access the Cryostat application. Note The URL to access the Cryostat application varies depending on the configuration parameters that you chose. Additional resources Helm (The Helm project) cryostat-helm (GitHub) Viewing application composition using the Topology view (OpenShift Container Platform) Revised on 2023-12-12 18:52:21 UTC
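If you disabled Routes and want to reach the web console while testing, a port-forward is one option. The following is a minimal sketch only; the service name cryostat-sample and port 8181 are assumptions, so substitute the values that the Release notes tab reports for your release:

oc port-forward svc/cryostat-sample 8181:8181
# Service name and port are assumptions; then browse to http://localhost:8181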
[ "get clustercryostat clustercryostatinstance-name", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: creationTimestamp: null name: oauth-client rules: - apiGroups: - operator.cryostat.io resources: - cryostats verbs: - create - patch - delete - get - apiGroups: - \"\" resources: - pods - pods/exec - services verbs: - create - patch - delete - get - apiGroups: - \"\" resources: - replicationcontrollers - endpoints verbs: - get - apiGroups: - apps resources: - deployments verbs: - create - get - apiGroups: - apps resources: - daemonsets - replicasets - statefulsets verbs: - get" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/installing_cryostat/assembly_installing-cryostat_cryostat
Chapter 7. Known issues
Chapter 7. Known issues Known issues in Red Hat 3scale API Management 2.15: Table 7.1. Known issues Issue number Description THREESCALE-7514 Nginx filter policy is not working as expected when using content-caching THREESCALE-9033 Some special characters in Application Keys are not supported THREESCALE-10761 Problem with special chars in application_id in utilization THREESCALE-10762 Command from Configuration page cause apicast to return 403 THREESCALE-10763 Command from Configuration page cause apicast to return 403 THREESCALE-10875 DeveloperAccount backup and restore needs to be hardened THREESCALE-11532 Backend clients cannot re-establish the connection Issue resolved in 2.15.2
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/release_notes_for_red_hat_3scale_api_management_2.15_on-premises/known_issues
8.11. Installation Source
8.11. Installation Source To specify a file or a location to install Red Hat Enterprise Linux from, select Installation Source from the Installation Summary screen. On this screen, you can choose between locally available installation media, such as a DVD or an ISO file, or a network location. Figure 8.9. Installation Source Screen Select one of the following options: Auto-detected installation media If you started the installation using the full installation DVD or USB drive, the installation program will detect it and display basic information under this option. Click the Verify button to ensure that the media is suitable for installation. This integrity test is the same as the one performed if you selected Test this media & Install Red Hat Enterprise Linux in the boot menu, or if you used the rd.live.check boot option. ISO file This option will appear if the installation program detected a partitioned hard drive with mountable file systems. Select this option, click the Choose an ISO button, and browse to the installation ISO file's location on your system. Then click Verify to ensure that the file is suitable for installation. On the network To specify a network location, select this option and choose from the following options in the drop-down menu: http:// https:// ftp:// nfs Using your selection as the start of the location URL, type the rest into the address box. If you choose NFS, another box will appear for you to specify any NFS mount options. Important When selecting an NFS-based installation source, you must specify the address with a colon ( : ) character separating the host name from the path. For example: To configure a proxy for an HTTP or HTTPS source, click the Proxy setup button. Check Enable HTTP proxy and type the URL into the Proxy URL box. If your proxy requires authentication, check Use Authentication and enter a user name and password. Click Add . If your HTTP or HTTPS URL refers to a repository mirror list, mark the check box under the input field. You can also specify additional repositories to gain access to more installation environments and software add-ons. See Section 8.13, "Software Selection" for more information. To add a repository, click the + button. To delete a repository, click the - button. Click the arrow icon to revert to the list of repositories, that is, to replace current entries with those that were present at the time you entered the Installation Source screen. To activate or deactivate a repository, click the check box in the Enabled column at each entry in the list. In the right part of the form, you can name your additional repository and configure it the same way as the primary repository on the network. Once you have selected your installation source, click Done to return to the Installation Summary screen.
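As an illustration, network source locations take forms like the following; the host names and paths are placeholders, not real servers:

http://download.example.com/rhel7/os/x86_64/
ftp://ftp.example.com/pub/rhel7/
server.example.com:/exports/rhel7/

The NFS entry uses the colon-separated host and path format that the Important admonition above describes.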
[ "server.example.com : /path/to/directory" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-source-x86
Migration guide Camel K to Camel Extensions for Quarkus
Migration guide Camel K to Camel Extensions for Quarkus Red Hat build of Apache Camel K 1.10.7 Migrating from Camel K to Camel Extensions for Quarkus
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/migration_guide_camel_k_to_camel_extensions_for_quarkus/index
Release notes
Release notes OpenShift sandboxed containers 1.8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.8/html/release_notes/index
Chapter 73. KafkaClientAuthenticationTls schema reference
Chapter 73. KafkaClientAuthenticationTls schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationTls schema properties To configure mTLS authentication, set the type property to the value tls . mTLS uses a TLS certificate to authenticate. 73.1. certificateAndKey The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private. You can use the secrets created by the User Operator, or you can create your own TLS certificate file, with the keys used for authentication, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt \ --from-file= MY-PRIVATE.key Note mTLS authentication can only be used with TLS connections. Example mTLS configuration authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key 73.2. KafkaClientAuthenticationTls schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationTls type from KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value tls for the type KafkaClientAuthenticationTls . Property Property type Description certificateAndKey CertAndKeySecretSource Reference to the Secret which holds the certificate and private key pair. type string Must be tls .
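As an illustrative sketch, the authentication block sits under the spec of the resource that makes the client connection. The resource name, bootstrap address, and CA secret name in the following KafkaConnect fragment are placeholders, but the nesting of the properties matches the schema above; because mTLS requires a TLS connection, a tls section is included as well:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: my-public-tls-certificate-file.crt
      key: private.key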
[ "create secret generic MY-SECRET --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt --from-file= MY-PRIVATE.key", "authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaClientAuthenticationTls-reference
Composing a customized RHEL system image
Composing a customized RHEL system image Red Hat Enterprise Linux 9 Creating customized system images with RHEL image builder on Red Hat Enterprise Linux 9 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_a_customized_rhel_system_image/index
Installing on a single node
Installing on a single node OpenShift Container Platform 4.18 Installing OpenShift Container Platform on a single node Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_a_single_node/index
Chapter 4. Creating images
Chapter 4. Creating images Learn how to create your own container images, based on pre-built images that are ready to help you. The process includes learning best practices for writing images, defining metadata for images, testing images, and using a custom builder workflow to create images to use with OpenShift Container Platform. After you create an image, you can push it to the internal registry. 4.1. Learning container best practices When creating container images to run on OpenShift Container Platform, there are a number of best practices to consider as an image author to ensure a good experience for consumers of those images. Because images are intended to be immutable and used as-is, the following guidelines help ensure that your images are highly consumable and easy to use on OpenShift Container Platform. 4.1.1. General container image guidelines The following guidelines apply when creating a container image in general, and are independent of whether the images are used on OpenShift Container Platform. Reuse images Wherever possible, base your image on an appropriate upstream image using the FROM statement. This ensures your image can easily pick up security fixes from an upstream image when it is updated, rather than you having to update your dependencies directly. In addition, use tags in the FROM instruction, for example, rhel:rhel7 , to make it clear to users exactly which version of an image your image is based on. Using a tag other than latest ensures your image is not subjected to breaking changes that might go into the latest version of an upstream image. Maintain compatibility within tags When tagging your own images, try to maintain backwards compatibility within a tag. For example, if you provide an image named foo and it currently includes version 1.0 , you might provide a tag of foo:v1 . When you update the image, as long as it continues to be compatible with the original image, you can continue to tag the new image foo:v1 , and downstream consumers of this tag are able to get updates without being broken. If you later release an incompatible update, then switch to a new tag, for example foo:v2 . This allows downstream consumers to move up to the new version at will, but not be inadvertently broken by the new incompatible image. Any downstream consumer using foo:latest takes on the risk of any incompatible changes being introduced. Avoid multiple processes Do not start multiple services, such as a database and SSHD , inside one container. This is not necessary because containers are lightweight and can be easily linked together for orchestrating multiple processes. OpenShift Container Platform allows you to easily colocate and co-manage related images by grouping them into a single pod. This colocation ensures the containers share a network namespace and storage for communication. Updates are also less disruptive as each image can be updated less frequently and independently. Signal handling flows are also clearer with a single process as you do not have to manage routing signals to spawned processes. Use exec in wrapper scripts Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, that script uses exec so that the script's process is replaced by your software. If you do not use exec , then signals sent by your container runtime go to your wrapper script instead of your software's process. This is not what you want. Consider a wrapper script that starts a process for some server. 
You start your container, for example, using podman run -i , which runs the wrapper script, which in turn starts your process. Suppose you then want to close your container with CTRL+C . If your wrapper script used exec to start the server process, podman sends SIGINT to the server process, and everything works as you expect. If you did not use exec in your wrapper script, podman sends SIGINT to the process for the wrapper script and your process keeps running like nothing happened. Also note that your process runs as PID 1 when running in a container. This means that if your main process terminates, the entire container is stopped, canceling any child processes you launched from your PID 1 process. Clean temporary files Remove all temporary files you create during the build process. This also includes any files added with the ADD command. For example, run the yum clean command after performing yum install operations. You can prevent the yum cache from ending up in an image layer by creating your RUN statement as follows: RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y Note that if you instead write: RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y Then the first yum invocation leaves extra files in that layer, and these files cannot be removed when the yum clean operation is run later. The extra files are not visible in the final image, but they are present in the underlying layers. The current container build process does not allow a command run in a later layer to shrink the space used by the image when something was removed in an earlier layer. However, this may change in the future. This means that if you perform an rm command in a later layer, although the files are hidden it does not reduce the overall size of the image to be downloaded. Therefore, as with the yum clean example, it is best to remove files in the same command that created them, where possible, so they do not end up written to a layer. In addition, performing multiple commands in a single RUN statement reduces the number of layers in your image, which improves download and extraction time. Place instructions in the proper order The container builder reads the Dockerfile and runs the instructions from top to bottom. Every instruction that is successfully executed creates a layer which can be reused the next time this or another image is built. It is very important to place instructions that rarely change at the top of your Dockerfile . Doing so ensures the builds of the same image are very fast because the cache is not invalidated by upper layer changes. For example, if you are working on a Dockerfile that contains an ADD command to install a file you are iterating on, and a RUN command to yum install a package, it is best to put the ADD command last: FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile This way each time you edit myfile and rerun podman build or docker build , the system reuses the cached layer for the yum command and only generates the new layer for the ADD operation. If instead you wrote the Dockerfile as: FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y Then each time you changed myfile and reran podman build or docker build , the ADD operation would invalidate the RUN layer cache, so the yum operation must be rerun as well. Mark important ports The EXPOSE instruction makes a port in the container available to the host system and other containers. 
While it is possible to specify that a port should be exposed with a podman run invocation, using the EXPOSE instruction in a Dockerfile makes it easier for both humans and software to use your image by explicitly declaring the ports your software needs to run: Exposed ports show up under podman ps associated with containers created from your image. Exposed ports are present in the metadata for your image returned by podman inspect . Exposed ports are linked when you link one container to another. Set environment variables It is good practice to set environment variables with the ENV instruction. One example is to set the version of your project. This makes it easy for people to find the version without looking at the Dockerfile . Another example is advertising a path on the system that could be used by another process, such as JAVA_HOME . Avoid default passwords Avoid setting default passwords. Many people extend the image and forget to remove or change the default password. This can lead to security issues if a user in production is assigned a well-known password. Passwords are configurable using an environment variable instead. If you do choose to set a default password, ensure that an appropriate warning message is displayed when the container is started. The message should inform the user of the value of the default password and explain how to change it, such as what environment variable to set. Avoid sshd It is best to avoid running sshd in your image. You can use the podman exec or docker exec command to access containers that are running on the local host. Alternatively, you can use the oc exec command or the oc rsh command to access containers that are running on the OpenShift Container Platform cluster. Installing and running sshd in your image opens up additional vectors for attack and requirements for security patching. Use volumes for persistent data Images use a volume for persistent data. This way OpenShift Container Platform mounts the network storage to the node running the container, and if the container moves to a new node the storage is reattached to that node. By using the volume for all persistent storage needs, the content is preserved even if the container is restarted or moved. If your image writes data to arbitrary locations within the container, that content might not be preserved. All data that needs to be preserved even after the container is destroyed must be written to a volume. Container engines support a readonly flag for containers, which can be used to strictly enforce good practices about not writing data to ephemeral storage in a container. Designing your image around that capability now makes it easier to take advantage of it later. Explicitly defining volumes in your Dockerfile makes it easy for consumers of the image to understand what volumes they must define when running your image. See the Kubernetes documentation for more information on how volumes are used in OpenShift Container Platform. Note Even with persistent volumes, each instance of your image has its own volume, and the filesystem is not shared between instances. This means the volume cannot be used to share state in a cluster. 4.1.2. OpenShift Container Platform-specific guidelines The following are guidelines that apply when creating container images specifically for use on OpenShift Container Platform. 4.1.2.1. 
Enable images for source-to-image (S2I) For images that are intended to run application code provided by a third party, such as a Ruby image designed to run Ruby code provided by a developer, you can enable your image to work with the Source-to-Image (S2I) build tool. S2I is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. 4.1.2.2. Support arbitrary user ids By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node. For an image to support running as an arbitrary user, directories and files that are written to by processes in the image must be owned by the root group and be read/writable by that group. Files to be executed must also have group execute permissions. Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image: RUN chgrp -R 0 /some/directory && \ chmod -R g=u /some/directory Because the container user is always a member of the root group, the container user can read and write these files. Warning Care must be taken when altering the directories and file permissions of sensitive areas of a container, which is no different than on a normal system. If applied to sensitive areas, such as /etc/passwd , this can allow the modification of such files by unintended users potentially exposing the container or host. CRI-O supports the insertion of arbitrary user IDs into the container's /etc/passwd , so changing permissions is never required. In addition, the processes running in the container must not listen on privileged ports, ports below 1024, since they are not running as a privileged user. Important If your S2I image does not include a USER declaration with a numeric user, your builds fail by default. To allow images that use either named users or the root 0 user to build in OpenShift Container Platform, you can add the project's builder service account, system:serviceaccount:<your-project>:builder , to the anyuid security context constraint (SCC). Alternatively, you can allow all images to run as any user. 4.1.2.3. Use services for inter-image communication For cases where your image needs to communicate with a service provided by another image, such as a web front end image that needs to access a database image to store and retrieve data, your image consumes an OpenShift Container Platform service. Services provide a static endpoint for access which does not change as containers are stopped, started, or moved. In addition, services provide load balancing for requests. 4.1.2.4. Provide common libraries For images that are intended to run application code provided by a third party, ensure that your image contains commonly used libraries for your platform. In particular, provide database drivers for common databases used with your platform. For example, provide JDBC drivers for MySQL and PostgreSQL if you are creating a Java framework image. Doing so prevents the need for common dependencies to be downloaded during application assembly time, speeding up application image builds. It also simplifies the work required by application developers to ensure all of their dependencies are met. 4.1.2.5. 
Use environment variables for configuration Users of your image are able to configure it without having to create a downstream image based on your image. This means that the runtime configuration is handled using environment variables. For a simple configuration, the running process can consume the environment variables directly. For a more complicated configuration or for runtimes which do not support this, configure the runtime by defining a template configuration file that is processed during startup. During this processing, values supplied using environment variables can be substituted into the configuration file or used to make decisions about what options to set in the configuration file. It is also possible and recommended to pass secrets such as certificates and keys into the container using environment variables. This ensures that the secret values do not end up committed in an image and leaked into a container image registry. Providing environment variables allows consumers of your image to customize behavior, such as database settings, passwords, and performance tuning, without having to introduce a new layer on top of your image. Instead, they can simply define environment variable values when defining a pod and change those settings without rebuilding the image. For extremely complex scenarios, configuration can also be supplied using volumes that would be mounted into the container at runtime. However, if you elect to do it this way you must ensure that your image provides clear error messages on startup when the necessary volume or configuration is not present. This topic is related to the Using Services for Inter-image Communication topic in that configuration like datasources are defined in terms of environment variables that provide the service endpoint information. This allows an application to dynamically consume a datasource service that is defined in the OpenShift Container Platform environment without modifying the application image. In addition, tuning is done by inspecting the cgroups settings for the container. This allows the image to tune itself to the available memory, CPU, and other resources. For example, Java-based images tune their heap based on the cgroup maximum memory parameter to ensure they do not exceed the limits and get an out-of-memory error. 4.1.2.6. Set image metadata Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that are needed. 4.1.2.7. Clustering You must fully understand what it means to run multiple instances of your image. In the simplest case, the load balancing function of a service handles routing traffic to all instances of your image. However, many frameworks must share information to perform leader election or failover state; for example, in session replication. Consider how your instances accomplish this communication when running in OpenShift Container Platform. Although pods can communicate directly with each other, their IP addresses change anytime the pod starts, stops, or is moved. Therefore, it is important for your clustering scheme to be dynamic. 4.1.2.8. Logging It is best to send all logging to standard out. OpenShift Container Platform collects standard out from containers and sends it to the centralized logging service where it can be viewed. 
If you must separate log content, prefix the output with an appropriate keyword, which makes it possible to filter the messages. If your image logs to a file, users must use manual operations to enter the running container and retrieve or view the log file. 4.1.2.9. Liveness and readiness probes Document example liveness and readiness probes that can be used with your image. These probes allow users to deploy your image with confidence that traffic is not routed to the container until it is prepared to handle it, and that the container is restarted if the process gets into an unhealthy state. 4.1.2.10. Templates Consider providing an example template with your image. A template gives users an easy way to quickly get your image deployed with a working configuration. Your template must include the liveness and readiness probes you documented with the image, for completeness. 4.2. Including metadata in images Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed. This topic only defines the metadata needed by the current set of use cases. Additional metadata or use cases may be added in the future. 4.2.1. Defining image metadata You can use the LABEL instruction in a Dockerfile to define image metadata. Labels are similar to environment variables in that they are key-value pairs attached to an image or a container. Labels are different from environment variables in that they are not visible to the running application, and they can also be used for fast look-up of images and containers. See the Docker documentation for more information on the LABEL instruction. The label names are typically namespaced. The namespace is set to reflect the project that will pick up the labels and use them. For OpenShift Container Platform the namespace is set to io.openshift and for Kubernetes the namespace is io.k8s . See the Docker custom metadata documentation for details about the format.
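For example, a Dockerfile might declare several of the supported labels described in the following table; the values shown are illustrative:

LABEL io.openshift.tags mongodb,mongodb24,nosql
LABEL io.openshift.wants mongodb,redis
LABEL io.openshift.non-scalable true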
Table 4.1. Supported Metadata Variable Description io.openshift.tags This label contains a list of tags represented as a list of comma-separated string values. The tags are the way to categorize the container images into broad areas of functionality. Tags help UI and generation tools to suggest relevant container images during the application creation process. io.openshift.wants Specifies a list of tags that the generation tools and the UI use to provide relevant suggestions if you do not have the container images with specified tags already. For example, if the container image wants mysql and redis and you do not have the container image with the redis tag, then the UI can suggest that you add this image to your deployment. io.k8s.description This label can be used to give the container image consumers more detailed information about the service or functionality this image provides. The UI can then use this description together with the container image name to provide more human friendly information to end users. io.openshift.non-scalable An image can use this variable to suggest that it does not support scaling. The UI then communicates this to consumers of that image. Being not-scalable means that the value of replicas should initially not be set higher than 1 . io.openshift.min-memory and io.openshift.min-cpu These labels suggest the amount of resources the container image needs to work properly. The UI can warn the user that deploying this container image may exceed their user quota. The values must be compatible with the Kubernetes quantity format. 4.3. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance: the build process and S2I scripts. 4.3.1. Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 4.3.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options for providing the assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 4.2. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality.
Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script:

#!/bin/bash
# restore build artifacts
if [ "$(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then
  mv /tmp/s2i/artifacts/* $HOME/.
fi
# move the application source
mv /tmp/s2i/src $HOME/src
# build application artifacts
pushd ${HOME}
make all
# install the artifacts
make install
popd

run script:

#!/bin/bash
# run the application
/opt/application/run.sh

save-artifacts script:

#!/bin/bash
pushd ${HOME}
if [ -d deps ]; then
  # all deps contents to tar stream
  tar cf - deps
fi
popd

usage script:

#!/bin/bash
# inform the user how to use the image
cat <<EOF
This is a S2I sample builder image, to use it, install
https://github.com/openshift/source-to-image
EOF

Additional resources S2I Image Creation Tutorial 4.4. About testing source-to-image images As a Source-to-Image (S2I) builder image author, you can test your S2I image locally and use the OpenShift Container Platform build system for automated testing and continuous integration. S2I requires the assemble and run scripts to be present to successfully run the S2I build. Providing the save-artifacts script reuses the build artifacts, and providing the usage script ensures that usage information is printed to the console when someone runs the container image outside of the S2I. The goal of testing an S2I image is to make sure that all of these described commands work properly, even if the base container image has changed or the tooling used by the commands was updated. 4.4.1. Understanding testing requirements The standard location for the test script is test/run . This script is invoked by the OpenShift Container Platform S2I image builder and it could be a simple Bash script or a static Go binary. The test/run script performs the S2I build, so you must have the S2I binary available in your $PATH . If required, follow the installation instructions in the S2I README . S2I combines the application source code and builder image, so to test it you need a sample application source to verify that the source successfully transforms into a runnable container image. The sample application should be simple, but it should exercise the crucial steps of the assemble and run scripts. 4.4.2. Generating scripts and tools The S2I tooling comes with powerful generation tools to speed up the process of creating a new S2I image. The s2i create command produces all the necessary S2I scripts and testing tools along with the Makefile : $ s2i create <image name> <destination directory> The generated test/run script must be adjusted to be useful, but it provides a good starting point to begin developing. Note The test/run script produced by the s2i create command requires that the sample application sources are inside the test/test-app directory. 4.4.3. Testing locally The easiest way to run the S2I image tests locally is to use the generated Makefile . If you did not use the s2i create command, you can copy the following Makefile template and replace the IMAGE_NAME parameter with your image name. Sample Makefile

IMAGE_NAME = openshift/ruby-20-centos7
CONTAINER_ENGINE := $(shell command -v podman 2> /dev/null | echo docker)

build:
	${CONTAINER_ENGINE} build -t $(IMAGE_NAME) .

.PHONY: test
test:
	${CONTAINER_ENGINE} build -t $(IMAGE_NAME)-candidate .
	IMAGE_NAME=$(IMAGE_NAME)-candidate test/run

4.4.4. Basic testing workflow The test script assumes you have already built the image you want to test. If required, first build the S2I image.
Run one of the following commands: If you use Podman, run the following command: $ podman build -t <builder_image_name> If you use Docker, run the following command: $ docker build -t <builder_image_name> The following steps describe the default workflow to test S2I image builders: Verify the usage script is working: If you use Podman, run the following command: $ podman run <builder_image_name> . If you use Docker, run the following command: $ docker run <builder_image_name> . Build the image: $ s2i build file:///path-to-sample-app <BUILDER_IMAGE_NAME> <OUTPUT_APPLICATION_IMAGE_NAME> Optional: if you support save-artifacts , run step 2 once again to verify that saving and restoring artifacts works properly. Run the container: If you use Podman, run the following command: $ podman run <output_application_image_name> If you use Docker, run the following command: $ docker run <output_application_image_name> Verify the container is running and the application is responding. Running these steps is generally enough to tell if the builder image is working as expected. 4.4.5. Using OpenShift Container Platform for building the image Once you have a Dockerfile and the other artifacts that make up your new S2I builder image, you can put them in a Git repository and use OpenShift Container Platform to build and push the image. Define a Docker build that points to your repository. If your OpenShift Container Platform instance is hosted on a public IP address, the build can be triggered each time you push into your S2I builder image GitHub repository. You can also use the ImageChangeTrigger to trigger a rebuild of your applications that are based on the S2I builder image you updated.
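For example, such a build can be defined from the command line. This is a sketch only; the repository URL and build name are placeholders:

$ oc new-build https://github.com/<your_org>/<s2i_builder_repo>.git \
    --strategy=docker --name=<builder_image_name>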
[ "RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y", "RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y", "FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile", "FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y", "RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory", "LABEL io.openshift.tags mongodb,mongodb24,nosql", "LABEL io.openshift.wants mongodb,redis", "LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support", "LABEL io.openshift.non-scalable true", "LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "s2i create _<image name>_ _<destination directory>_", "IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run", "podman build -t <builder_image_name>", "docker build -t <builder_image_name>", "podman run <builder_image_name> .", "docker run <builder_image_name> .", "s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_", "podman run <output_application_image_name>", "docker run <output_application_image_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/images/creating-images
10.8. Ports Used by Red Hat JBoss Data Virtualization
10.8. Ports Used by Red Hat JBoss Data Virtualization Red Hat JBoss Data Virtualization inherits the ports used by JBoss Enterprise Application Platform. You can find a full list here: https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6/html/Administration_and_Configuration_Guide/Network_Ports_Used_By_JBoss_Enterprise_Application_Platform_62.html In addition to these, ports 31000 and 35432 must also be kept open.
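For example, on a host that uses firewalld, the additional ports might be opened as follows. This is a sketch only; adjust the zone and protocol to your environment:

$ firewall-cmd --permanent --add-port=31000/tcp
$ firewall-cmd --permanent --add-port=35432/tcp
$ firewall-cmd --reload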
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/ports_used_by_red_hat_jboss_data_virtualization
10.11. Unusual Failover Behavior
10.11. Unusual Failover Behavior A common problem with cluster servers is unusual failover behavior. Services stop when other services start, or services refuse to start on failover. This can be caused by complex failover configurations that combine failover domains, service dependency, and service exclusivity. Try scaling back to a simpler service or failover domain configuration and see if the issue persists. Avoid features like service exclusivity and dependency unless you fully understand how those features may affect failover under all conditions.
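While testing a simplified configuration, it can help to watch service states and move services by hand. The following is a sketch only, assuming the cluster service management tools are installed:

$ clustat
$ clusvcadm -d <service_name>
$ clusvcadm -e <service_name>

clustat shows the current state of cluster members and services; clusvcadm disables and re-enables an individual service so you can observe where it starts.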
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-badfailoverbehavior-ca
Chapter 7. Assigning permissions using roles and groups
Chapter 7. Assigning permissions using roles and groups Roles and groups have a similar purpose, which is to give users access and permissions to use applications. Groups are a collection of users to which you apply roles and attributes. Roles define specific application permissions and access control. A role typically applies to one type of user. For example, an organization may include admin , user , manager , and employee roles. An application can assign access and permissions to a role and then assign multiple users to that role so the users have the same access and permissions. For example, the Admin Console has roles that give permission to users to access different parts of the Admin Console. There is a global namespace for roles and each client also has its own dedicated namespace where roles can be defined. 7.1. Creating a realm role Realm-level roles are a namespace for defining your roles. To see the list of roles, click Realm Roles in the menu. Procedure Click Create Role . Enter a Role Name . Enter a Description . Click Save . Add role The description field can be localized by specifying a substitution variable with ${var-name} strings. The localized value is configured in your theme within the themes property files. See the Server Developer Guide for more details. 7.2. Client roles Client roles are namespaces dedicated to clients. Each client gets its own namespace. Client roles are managed under the Roles tab for each client. You interact with this UI the same way you do for realm-level roles. 7.3. Converting a role to a composite role Any realm or client level role can become a composite role . A composite role is a role that has one or more additional roles associated with it. When a composite role is mapped to a user, the user gains the roles associated with the composite role. This inheritance is recursive, so users also inherit any composite of composites. However, we recommend that composite roles are not overused. Procedure Click Realm Roles in the menu. Click the role that you want to convert. From the Action list, select Add associated roles . Composite role The role selection UI is displayed on the page and you can associate realm level and client level roles to the composite role you are creating. In this example, the employee realm-level role is associated with the developer composite role. Any user with the developer role also inherits the employee role. Note When creating tokens and SAML assertions, any composite also has its associated roles added to the claims and assertions of the authentication response sent back to the client. 7.4. Assigning role mappings You can assign role mappings to a user through the Role Mappings tab for that user. Procedure Click Users in the menu. Click the user that you want to perform a role mapping on. Click the Role mappings tab. Click Assign role . Select the role you want to assign to the user from the dialog. Click Assign . Role mappings In the preceding example, we are assigning the composite role developer to a user. That role was created in the Composite Roles topic. Effective role mappings When the developer role is assigned, the employee role associated with the developer composite is displayed with Inherited "True". Inherited roles are the roles explicitly assigned to users and roles that are inherited from composites.
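The same mapping can also be scripted with the admin CLI. The following is a sketch only, assuming a realm named myrealm and the kcadm.sh tool shipped with the server distribution:

$ kcadm.sh add-roles -r myrealm --uusername <username> --rolename developer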
7.5. Using default roles Use default roles to automatically assign user role mappings when a user is created or imported through Identity Brokering . Procedure Click Realm settings in the menu. Click the User registration tab. Default roles This screenshot shows that some default roles already exist. 7.6. Role scope mappings On creation of an OIDC access token or SAML assertion, the user role mappings become claims within the token or assertion. Applications use these claims to make access decisions on the resources controlled by the application. Red Hat build of Keycloak digitally signs access tokens and applications re-use them to invoke remotely secured REST services. However, these tokens have an associated risk. An attacker can obtain these tokens and use their permissions to compromise your networks. To prevent this situation, use Role Scope Mappings . Role Scope Mappings limit the roles declared inside an access token. When a client requests user authentication, the access token they receive contains only the role mappings that are explicitly specified for the client's scope. The result is that you limit the permissions of each individual access token instead of giving the client access to all of the user's permissions. By default, each client gets all the role mappings of the user. You can view the role mappings for a client. Procedure Click Clients in the menu. Click the client to go to the details. Click the Client scopes tab. Click the link in the row with Dedicated scope and mappers for this client . Click the Scope tab. Full scope By default, the effective roles of scopes are every declared role in the realm. To change this default behavior, toggle Full Scope Allowed to OFF and declare the specific roles you want in each client. You can also use client scopes to define the same role scope mappings for a set of clients. Partial scope 7.7. Groups Groups in Red Hat build of Keycloak manage a common set of attributes and role mappings for each user. Users can be members of any number of groups and inherit the attributes and role mappings assigned to each group. To manage groups, click Groups in the menu. Groups Groups are hierarchical. A group can have multiple subgroups but a group can have only one parent. Subgroups inherit the attributes and role mappings from their parent. Users inherit the attributes and role mappings from their parent as well. If you have a parent group and a child group, and a user that belongs only to the child group, the user in the child group inherits the attributes and role mappings of both the parent group and the child group. The following example includes a top-level Sales group and a child North America subgroup. To add a group: Click the group. Click Create group . Enter a group name. Click Create . Click the group name. The group management page is displayed. Group Attributes and role mappings you define are inherited by the groups and users that are members of the group. To add a user to a group: Click Users in the menu. Click the user that you want to perform a role mapping on. If the user is not displayed, click View all users . Click Groups . User groups Click Join Group . Select a group from the Available Groups tree in the dialog. Click Join . To remove a group from a user: Click Users in the menu. Click the user to be removed from the group. Click Leave on the group table row. In this example, the user jimlincoln is in the North America group. You can see jimlincoln displayed under the Members tab for the group. Group membership
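The same group hierarchy can be created from the admin CLI. This is a sketch only, again assuming the myrealm realm; the group ID returned by the first command is used by the second:

$ kcadm.sh create groups -r myrealm -s name=Sales
$ kcadm.sh create groups/<sales_group_id>/children -r myrealm -s name='North America'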
7.7.1. Groups compared to roles Groups and roles have some similarities and differences. In Red Hat build of Keycloak, groups are a collection of users to which you apply roles and attributes. Roles define types of users and applications assign permissions and access control to roles. Composite Roles are similar to Groups as they provide the same functionality. The difference between them is conceptual. Composite roles apply the permission model to a set of services and applications. Use composite roles to manage applications and services. Groups focus on collections of users and their roles in an organization. Use groups to manage users. 7.7.2. Using default groups To automatically assign group membership to any user who is created or imported through Identity Brokering , you use default groups. Click Realm settings in the menu. Click the User registration tab. Click the Default Groups tab. Default groups This screenshot shows that some default groups already exist.
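The configured default groups can also be inspected from the admin CLI; a sketch, again assuming the myrealm realm:

$ kcadm.sh get realms/myrealm/default-groups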
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/assigning_permissions_using_roles_and_groups
Chapter 6. Uninstalling OpenShift Data Foundation
Chapter 6. Uninstalling OpenShift Data Foundation 6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_power/uninstalling_openshift_data_foundation
Chapter 6. Installing a cluster on GCP in a disconnected environment
Chapter 6. Installing a cluster on GCP in a disconnected environment In OpenShift Container Platform 4.18, you can install a cluster on Google Cloud Platform (GCP) in a restricted network by creating an internal mirror of the installation release content on an existing Google Virtual Private Cloud (VPC). Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster will require internet access to use the GCP APIs. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in GCP. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . 6.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 6.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program.
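After the cluster is running, the key pair can be used to reach a node as the core user; a sketch, where the node address is a placeholder:

$ ssh -i <path>/<file_name> core@<node_ip_or_hostname>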
6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You configured a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC.
For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 6.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Note Not all instance types are available in all regions and zones.
For a detailed breakdown of which instance types are available in which zones, see regions and zones (Google documentation). Some instance types require the use of Hyperdisk storage. If you use an instance type that requires Hyperdisk storage, all of the nodes in your cluster must support Hyperdisk storage, and you must change the default storage class to use Hyperdisk storage. For more information, see machine series support for Hyperdisk (Google documentation). For instructions on modifying storage classes, see the "GCE PersistentDisk (gcePD) object definition" section in the Dynamic Provisioning page in Storage . Example 6.1. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D 6.5.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 6.2. Machine series for 64-bit ARM machines C4A Tau T2A 6.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 6.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 6.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs.
Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 6.5.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 15 17 18 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 
3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 24 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 27 Provide the contents of the certificate file that you used for your mirror registry. 28 Provide the imageContentSources section from the output of the command to mirror the repository. 6.5.8. Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml file and completed any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: $ ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: $ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: $ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers.
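After the cluster is installed, the resulting Ingress Controller can be inspected to confirm the setting; a sketch:

$ oc -n openshift-ingress-operator get ingresscontroller default -o yaml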
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created.
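Once the cluster is running, the generated proxy configuration can be reviewed; a sketch:

$ oc get proxy cluster -o yaml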
6.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH Verification Verify your installation by using an oc command: $ oc <command>
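For example, a quick check that the client is installed and on your PATH is to print its version; the version subcommand is part of the standard CLI:

$ oc version --client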
Required GCP permissions
compute.machineTypes.list
compute.regions.list
compute.zones.list
dns.changes.create
dns.changes.get
dns.managedZones.create
dns.managedZones.delete
dns.managedZones.get
dns.managedZones.list
dns.networks.bindPrivateDNSZone
dns.resourceRecordSets.create
dns.resourceRecordSets.delete
dns.resourceRecordSets.list
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3
1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: GCPProviderSpec
    predefinedRoles:
    - roles/storage.admin
    - roles/iam.serviceAccountUser
    skipServiceCheck: true
  ...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  service_account.json: <base64_encoded_gcp_service_account_file>
Important
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
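The value of service_account.json in the sample Secret object must be the base64-encoded contents of the corresponding GCP service account key file. A minimal sketch of producing that value on Linux, where the file name is an example:
$ base64 -w0 <path_to_service_account_key_file>.json
The command prints the encoded string to standard output so that you can paste it into the data field of the Secret manifest.
6.7.2. Configuring a GCP cluster to use short-term credentials
To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster.
6.7.2.1.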
Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary.
Note
The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites
You have access to an OpenShift Container Platform account with cluster administrator access.
You have installed the OpenShift CLI ( oc ).
You have added one of the following authentication options to the GCP account that the ccoctl utility uses:
The IAM Workload Identity Pool Admin role
The following granular permissions:
compute.projects.get
iam.googleapis.com/workloadIdentityPoolProviders.create
iam.googleapis.com/workloadIdentityPoolProviders.get
iam.googleapis.com/workloadIdentityPools.create
iam.googleapis.com/workloadIdentityPools.delete
iam.googleapis.com/workloadIdentityPools.get
iam.googleapis.com/workloadIdentityPools.undelete
iam.roles.create
iam.roles.delete
iam.roles.list
iam.roles.undelete
iam.roles.update
iam.serviceAccounts.create
iam.serviceAccounts.delete
iam.serviceAccounts.getIamPolicy
iam.serviceAccounts.list
iam.serviceAccounts.setIamPolicy
iam.workloadIdentityPoolProviders.get
iam.workloadIdentityPools.delete
resourcemanager.projects.get
resourcemanager.projects.getIamPolicy
resourcemanager.projects.setIamPolicy
storage.buckets.create
storage.buckets.delete
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.objects.create
storage.objects.delete
storage.objects.list
Procedure
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Note
Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:
$ oc image extract $CCO_IMAGE \
  --file="/usr/bin/ccoctl.<rhel_version>" \ 1
  -a ~/.pull-secret
1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
rhel8 : Specify this value for hosts that use RHEL 8.
rhel9 : Specify this value for hosts that use RHEL 9.
Change the permissions to make ccoctl executable by running the following command:
$ chmod 775 ccoctl.<rhel_version>
Verification
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:
$ ./ccoctl.rhel9
Example output
OpenShift credentials provisioning tool
Usage:
  ccoctl [command]
Available Commands:
  aws        Manage credentials objects for AWS cloud
  azure      Manage credentials objects for Azure
  gcp        Manage credentials objects for Google cloud
  help       Help about any command
  ibmcloud   Manage credentials objects for IBM Cloud
  nutanix    Manage credentials objects for Nutanix
Flags:
  -h, --help   help for ccoctl
Use "ccoctl [command] --help" for more information about a command.
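Because the extracted binary must run on a matching architecture, you can optionally confirm it before running the tool; a minimal check with the standard file utility, with abbreviated illustrative output:
$ file ccoctl.rhel9
Example output
ccoctl.rhel9: ELF 64-bit LSB executable, x86-64, ...
6.7.2.2.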
Creating GCP resources with the Cloud Credential Operator utility
You can use the ccoctl gcp create-all command to automate the creation of GCP resources.
Note
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Prerequisites
You have extracted and prepared the ccoctl binary.
Procedure
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3
1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
Note
This command might take a few moments to run.
Use the ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl gcp create-all \
  --name=<name> \ 1
  --region=<gcp_region> \ 2
  --project=<gcp_project_id> \ 3
  --credentials-requests-dir=<path_to_credentials_requests_directory> 4
1 Specify the user-defined name for all created GCP resources used for tracking. If you plan to install the GCP Filestore Container Storage Interface (CSI) Driver Operator, retain this value.
2 Specify the GCP region in which cloud resources will be created.
3 Specify the GCP project ID in which cloud resources will be created.
4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts.
Note
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
Verification
To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml
openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-gcp-cloud-credentials-credentials.yaml
You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts.
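If you have the gcloud CLI installed, a quick way to list the service accounts is the following sketch; the tracking name that you passed with --name appears in the generated account names:
$ gcloud iam service-accounts list --project <gcp_project_id>
6.7.2.3.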
Incorporating the Cloud Credential Operator utility manifests
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program.
Prerequisites
You have configured an account with the cloud platform that hosts your cluster.
You have configured the Cloud Credential Operator utility ( ccoctl ).
You have created the cloud provider resources that are required for your cluster with the ccoctl utility.
Procedure
Add the following granular permissions to the GCP account that the installation program uses:
Example 6.4. Required GCP permissions
compute.machineTypes.list
compute.regions.list
compute.zones.list
dns.changes.create
dns.changes.get
dns.managedZones.create
dns.managedZones.delete
dns.managedZones.get
dns.managedZones.list
dns.networks.bindPrivateDNSZone
dns.resourceRecordSets.create
dns.resourceRecordSets.delete
dns.resourceRecordSets.list
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the tls directory that contains the private key to the installation directory:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
6.8. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
You have configured an account with the cloud platform that hosts your cluster.
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Procedure
Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations:
The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables
The ~/.gcp/osServiceAccount.json file
The gcloud CLI default credentials
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
  --log-level=info 2
1 For <installation_directory> , specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn , debug , or error instead of info .
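While the create cluster command runs, you can follow detailed progress from a second terminal by tailing the installation log; a minimal sketch, where the log file is created by the installation program in your installation directory:
$ tail -f <installation_directory>/.openshift_install.log
Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role.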
If you included the Service Account Key Admin role, you can remove it.
Verification
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to <installation_directory>/.openshift_install.log .
Important
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
6.9. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Verify that you can run oc commands successfully by using the exported configuration:
$ oc whoami
Example output
system:admin
6.10. Disabling the default OperatorHub catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \
  -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Tip
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
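After you apply the patch, you can confirm that the default catalog sources are gone; a minimal check against the namespace where the default sources are created:
$ oc get catalogsource -n openshift-marketplace
If the patch succeeded, the default Red Hat and community catalog sources are no longer listed.
6.11.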
Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager .
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
See About remote health monitoring for more information about the Telemetry service.
6.12. Next steps
Validate an installation .
Customize your cluster .
Configure image streams for the Cluster Samples Operator and the must-gather tool.
Learn how to use Operator Lifecycle Manager in disconnected environments .
If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores .
If necessary, you can opt out of remote health reporting .
If necessary, see Registering your disconnected cluster .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 
26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a 
command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_gcp/installing-restricted-networks-gcp-installer-provisioned
Appendix D. Trusted Artifact Signer components and version numbers
The following tables list Red Hat's Trusted Artifact Signer (RHTAS) software components and their corresponding version numbers for the 1.1.1 release.
Table D.1. Binaries
Component Version
cosign 2.4.1
gitsign 0.11.0
rekor-cli 1.3.7
ec 0.2.1
createtree 1.7.0
updatetree 1.7.0
tuftool 0.12.0
tuffer 0.17.1
Table D.2. Trillian
Component Version
logserver 1.7.0
logsigner 1.7.0
database 1.7.0
redis 1.7.0
Table D.3. Rekor
Component Version
rekor-server 1.3.7
backfill-redis 1.3.7
rekor-search-ui 1.3.7
Table D.4. Fulcio
Component Version
fulcio-server 1.6.5
Table D.5. Certificate Transparency
Component Version
certificate-transparency-go 1.3.0
Table D.6. Timestamp Authority
Component Version
timestamp-authority 1.2.3
Additional resources
For more information about Trillian, see the project page on GitHub .
For more information about Sigstore's Rekor, see the project page on GitHub .
For more information about Sigstore's Fulcio, see the project page on GitHub .
For more information about Sigstore's Timestamp Authority, see the project page on GitHub .
For more information about TUF, see their home page hosted by the Linux Foundation.
https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/deployment_guide/trusted-artifact-signer-components-and-version-numbers_deploy
Appendix B. Contact information Red Hat Decision Manager documentation team: [email protected]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/author-group
Installing on IBM Cloud
OpenShift Container Platform 4.15
Installing OpenShift Container Platform on IBM Cloud
Red Hat OpenShift Documentation Team
[ "ibmcloud plugin install cis", "ibmcloud cis instance-create <instance_name> standard-next 1", "ibmcloud cis instance-set <instance_name> 1", "ibmcloud cis domain-add <domain_name> 1", "ibmcloud plugin install cloud-dns-services", "ibmcloud dns instance-create <instance-name> standard-dns 1", "ibmcloud dns instance-target <instance-name>", "ibmcloud dns zone-create <zone-name> 1", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IC_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 10 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 11 fips: false 12 sshKey: ssh-ed25519 AAAA... 
13", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IC_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: 9 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 11 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 12 fips: false 13 sshKey: ssh-ed25519 AAAA... 14", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName 
<installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IC_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 11 resourceGroupName: eu-gb-example-cluster-rg 12 networkResourceGroupName: eu-gb-example-existing-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 
19", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IC_API_KEY=<api_key>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-cluster-rg 13 networkResourceGroupName: eu-gb-example-existing-network-rg 14 vpcName: eu-gb-example-network-1 15 controlPlaneSubnets: 16 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 17 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 18 pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "DEFAULT_SG=USD(ibmcloud is vpc <your_vpc_name> --output JSON | jq -r '.default_security_group.id')", "ibmcloud is security-group-rule-add USDDEFAULT_SG inbound tcp --remote 0.0.0.0/0 --port-min 443 --port-max 443", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export IC_API_KEY=<api_key>", "./openshift-install coreos print-stream-json", ".Example output ---- \"release\": \"415.92.202311241643-0\", \"formats\": { \"qcow2.gz\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.15-9.2/builds/415.92.202311241643-0/x86_64/rhcos-415.92.202311241643-0-ibmcloud.x86_64.qcow2.gz\", \"sha256\": \"6b562dee8431bec3b93adeac1cfefcd5e812d41e3b7d78d3e28319870ffc9eae\", \"uncompressed-sha256\": \"5a0f9479505e525a30367b6a6a6547c86a8f03136f453c1da035f3aa5daa8bc9\" ----", "mkdir <installation_directory>", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "vpcName: <existing_vpc> controlPlaneSubnets: <control_plane_subnet> computeSubnets: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "serviceEndpoints: - name: IAM url: <iam_alternate_endpoint_url> - name: VPC url: <vpc_alternate_endpoint_url> - name: ResourceController url: <resource_controller_alternate_endpoint_url> - name: ResourceManager url: <resource_manager_alternate_endpoint_url> - name: DNSServices url: <dns_services_alternate_endpoint_url> - name: COS url: <cos_alternate_endpoint_url> - name: GlobalSearch url: <global_search_alternate_endpoint_url> - name: GlobalTagging url: <global_tagging_alternate_endpoint_url>", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibm-cloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 
serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-east 12 resourceGroupName: us-east-example-cluster-rg 13 serviceEndpoints: 14 - name: IAM url: https://private.us-east.iam.cloud.ibm.com - name: VPC url: https://us-east.private.iaas.cloud.ibm.com/v1 - name: ResourceController url: https://private.us-east.resource-controller.cloud.ibm.com - name: ResourceManager url: https://private.us-east.resource-controller.cloud.ibm.com - name: DNSServices url: https://api.private.dns-svcs.cloud.ibm.com/v1 - name: COS url: https://s3.direct.us-east.cloud-object-storage.appdomain.cloud - name: GlobalSearch url: https://api.private.global-search-tagging.cloud.ibm.com - name: GlobalTagging url: https://tags.private.global-search-tagging.cloud.ibm.com networkResourceGroupName: us-east-example-existing-network-rg 15 vpcName: us-east-example-network-1 16 controlPlaneSubnets: 17 - us-east-example-network-1-cp-us-east-1 - us-east-example-network-1-cp-us-east-2 - us-east-example-network-1-cp-us-east-3 computeSubnets: 18 - us-east-example-network-1-compute-us-east-1 - us-east-example-network-1-compute-us-east-2 - us-east-example-network-1-compute-us-east-3 credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 additionalTrustBundle: | 22 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 23 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "export 
OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE=\"<path_to_image>/rhcos-<image_version>-ibmcloud.x86_64.qcow2.gz\"", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc apply -f ./oc-mirror-workspace/results-<id>/", "oc get imagecontentsourcepolicy", "oc get catalogsource --all-namespaces", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "controlPlane: platform: ibmcloud: bootVolume: encryptionKey:", "compute: platform: ibmcloud: bootVolume: encryptionKey:", "platform: ibmcloud: defaultMachinePlatform: bootvolume: encryptionKey:", "platform: ibmcloud: resourceGroupName:", "platform: ibmcloud: serviceEndpoints: - name: url:", "platform: ibmcloud: networkResourceGroupName:", "platform: ibmcloud: dedicatedHosts: profile:", "platform: ibmcloud: dedicatedHosts: name:", "platform: ibmcloud: type:", "platform: ibmcloud: vpcName:", "platform: ibmcloud: controlPlaneSubnets:", "platform: ibmcloud: computeSubnets:", "ibmcloud is volumes --resource-group-name <infrastructure_id>", "ibmcloud is volume-delete --force <volume_id>", "export IC_API_KEY=<api_key>", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_on_ibm_cloud/index
Chapter 3. OAuthAuthorizeToken [oauth.openshift.io/v1]
Description
OAuthAuthorizeToken describes an OAuth authorization token.
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer).
Type
object
3.1. Specification
Property Type Description
apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
clientName string ClientName references the client that created this token.
codeChallenge string CodeChallenge is the optional code_challenge associated with this authorization code, as described in rfc7636.
codeChallengeMethod string CodeChallengeMethod is the optional code_challenge_method associated with this authorization code, as described in rfc7636.
expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires.
kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
redirectURI string RedirectURI is the redirection associated with the token.
scopes array (string) Scopes is an array of the requested scopes.
state string State data from request.
userName string UserName is the user name associated with this token.
userUID string UserUID is the unique UID associated with this token. UserUID and UserName must both match for this token to be valid.
3.2. API endpoints
The following API endpoints are available:
/apis/oauth.openshift.io/v1/oauthauthorizetokens
DELETE : delete collection of OAuthAuthorizeToken
GET : list or watch objects of kind OAuthAuthorizeToken
POST : create an OAuthAuthorizeToken
/apis/oauth.openshift.io/v1/watch/oauthauthorizetokens
GET : watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead.
/apis/oauth.openshift.io/v1/oauthauthorizetokens/{name}
DELETE : delete an OAuthAuthorizeToken
GET : read the specified OAuthAuthorizeToken
PATCH : partially update the specified OAuthAuthorizeToken
PUT : replace the specified OAuthAuthorizeToken
/apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name}
GET : watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter.
3.2.1. /apis/oauth.openshift.io/v1/oauthauthorizetokens
Table 3.1. Global query parameters
Parameter Type Description
pretty string If 'true', then the output is pretty printed.
HTTP method DELETE
Description
delete collection of OAuthAuthorizeToken
Table 3.2. Query parameters
Parameter Type Description
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize.
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthAuthorizeToken Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeTokenList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthAuthorizeToken Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.2. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/oauth.openshift.io/v1/oauthauthorizetokens/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OAuthAuthorizeToken Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. 
Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthAuthorizeToken Table 3.17. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthAuthorizeToken Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthAuthorizeToken Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.23. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.4. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. HTTP method GET Description watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
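As a worked illustration of the resource described above, the following is a minimal sketch of an OAuthAuthorizeToken manifest; every value shown (the token name, client, user, UID, and redirect URI) is a hypothetical placeholder, not a value taken from this reference.

apiVersion: oauth.openshift.io/v1
kind: OAuthAuthorizeToken
metadata:
  name: example-authorize-token                  # hypothetical name
clientName: openshift-browser-client             # hypothetical OAuth client
expiresIn: 300                                   # seconds from CreationTime until expiry
redirectURI: https://app.example.com/oauth/callback
scopes:
- user:full
userName: developer                              # hypothetical user
userUID: d9f0a1b2-3c4d-5e6f-7a8b-9c0d1e2f3a4b    # must match userName for the token to be valid

Assuming cluster-admin credentials, the endpoints listed in this chapter map onto the usual oc verbs, for example:

oc get oauthauthorizetokens                             # GET: list objects of kind OAuthAuthorizeToken
oc delete oauthauthorizetoken example-authorize-token   # DELETE: delete an OAuthAuthorizeToken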
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/oauth_apis/oauthauthorizetoken-oauth-openshift-io-v1
17.16. Setting vLAN Tags
17.16. Setting vLAN Tags Virtual local area network (vLAN) tags are added by using the virsh net-edit command. vLAN tags can also be used with PCI device assignment of SR-IOV devices. For more information, see Section 16.2.3, "Configuring PCI Assignment with SR-IOV Devices". <network> <name>ovs-net</name> <forward mode='bridge'/> <bridge name='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> <vlan trunk='yes'> <tag id='42' nativeMode='untagged'/> <tag id='47'/> </vlan> <portgroup name='dontpanic'> <vlan> <tag id='42'/> </vlan> </portgroup> </network> Figure 17.30. Setting a vLAN tag (on supported network types only) If (and only if) the network type supports vLAN tagging transparent to the guest, an optional <vlan> element can specify one or more vLAN tags to apply to the traffic of all guests using this network. (openvswitch and type='hostdev' SR-IOV networks do support transparent vLAN tagging of guest traffic; everything else, including standard Linux bridges and libvirt's own virtual networks, does not support it. 802.1Qbh (vn-link) and 802.1Qbg (VEPA) switches provide their own way, outside of libvirt, to tag guest traffic onto specific vLANs.) As expected, the tag attribute specifies which vLAN tag to use. If a network has more than one <vlan> element defined, it is assumed that the user wants to do vLAN trunking using all the specified tags. If vLAN trunking with a single tag is required, the optional attribute trunk='yes' can be added to the <vlan> element. For network connections using openvswitch, it is possible to configure the 'native-tagged' and 'native-untagged' vLAN modes by using the optional nativeMode attribute on the <tag> element: nativeMode may be set to 'tagged' or 'untagged', and the id attribute of the element sets the native vLAN. <vlan> elements can also be specified in a <portgroup> element, as well as directly in a domain's <interface> element. If a vLAN tag is specified in multiple locations, the setting in <interface> takes precedence, followed by the setting in the <portgroup> selected by the interface configuration. The <vlan> in <network> is selected only if none is given in <portgroup> or <interface>.
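The network definition above is edited in place with virsh net-edit ovs-net. To see the precedence rules from the previous paragraph in practice, the following is a sketch of a domain <interface> definition that overrides both the portgroup tag (42) and the network-level trunk configuration; the tag value 100 is an illustrative assumption, not taken from this guide.

<interface type='network'>
  <source network='ovs-net' portgroup='dontpanic'/>
  <vlan>
    <tag id='100'/>  <!-- hypothetical tag; <interface> settings take precedence over <portgroup> and <network> -->
  </vlan>
</interface>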
[ "<network> <name>ovs-net</name> <forward mode='bridge'/> <bridge name='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> <vlan trunk='yes'> <tag id='42' nativeMode='untagged'/> <tag id='47'/> </vlan> <portgroup name='dontpanic'> <vlan> <tag id='42'/> </vlan> </portgroup> </network>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-setting_vlan_tags
probe::socket.recvmsg
probe::socket.recvmsg Name probe::socket.recvmsg - Message being received on socket Synopsis socket.recvmsg Values family Protocol family value name Name of this probe protocol Protocol value state Socket state value flags Socket flags value type Socket type value size Message size in bytes Context The message receiver. Description Fires at the beginning of receiving a message on a socket via the sock_recvmsg function.
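As a minimal usage sketch, the probe can be exercised from the shell with a one-line SystemTap script; the output format is an illustrative choice, and the variables printed are the documented values listed above.

stap -e 'probe socket.recvmsg { printf("%s: family=%d type=%d size=%d\n", name, family, type, size) }'

This prints one line per sock_recvmsg call until the script is interrupted with Ctrl+C.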
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-recvmsg
Chapter 3. Connecting to Knative with Kamelets
Chapter 3. Connecting to Knative with Kamelets You can connect Kamelets to Knative destinations (channels or brokers). Red Hat OpenShift Serverless is based on the open source Knative project , which provides portability and consistency across hybrid and multi-cloud environments by enabling an enterprise-grade serverless platform. OpenShift Serverless includes support for the Knative Eventing and Knative Serving components. Red Hat OpenShift Serverless, Knative Eventing, and Knative Serving enable you to use an event-driven architecture with serverless applications, decoupling the relationship between event producers and consumers by using a publish-subscribe or event-streaming model. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and consumers. These events conform to the CloudEvents specifications , which enables creating, parsing, sending, and receiving events in any programming language. You can use Kamelets to send CloudEvents to Knative and send them from Knative to event consumers. Kamelets can translate messages to CloudEvents and you can use them to apply any pre-processing and post-processing of the data within CloudEvents. 3.1. Overview of connecting to Knative with Kamelets If you use a Knative stream-processing framework, you can use Kamelets to connect services and applications to a Knative destination (channel or broker). Figure 3.1 illustrates the flow of connecting source and sink Kamelets to a Knative destination. Figure 3.1: Data flow with Kamelets and a Knative channel Here is an overview of the basic steps for using Kamelets and Kamelet Bindings to connect applications and services to a Knative destination: Set up Knative: Prepare your OpenShift cluster by installing the Camel K and OpenShift Serverless operators. Install the required Knative Serving and Eventing components. Create a Knative channel or broker. Determine which services or applications you want to connect to your Knative channel or broker. View the Kamelet Catalog to find the Kamelets for the source and sink components that you want to add to your integration. Also, determine the required configuration parameters for each Kamelet that you want to use. Create Kamelet Bindings: Create a Kamelet Binding that connects a source Kamelet to a Knative channel (or broker). Create a Kamelet Binding that connects the Knative channel (or broker) to a sink Kamelet. Optionally, manipulate the data that is passing between the Knative channel (or broker) and the data source or sink by adding one or more action Kamelets as intermediary steps within a Kamelet Binding. Optionally, define how to handle errors within a Kamelet Binding. Apply the Kamelet Bindings as resources to the project. The Camel K operator generates a separate Camel integration for each Kamelet Binding. When you configure a Kamelet Binding to use a Knative channel or a broker as the source of events, the Camel K operator materializes the corresponding integration as a Knative Serving service, to leverage the auto-scaling capabilities offered by Knative. 3.2. Setting up Knative Setting up Knative involves installing the required OpenShift operators and creating a Knative channel. 3.2.1. Preparing your OpenShift cluster To use Kamelets and OpenShift Serverless, install the following operators, components, and CLI tools: Red Hat Integration - Camel K operator and CLI tool - The operator installs and manages Camel K - a lightweight integration framework that runs natively in the cloud on OpenShift. 
The kamel CLI tool allows you to access all Camel K features. See the installation instructions in Installing Camel K . OpenShift Serverless operator - Provides a collection of APIs that enables containers, microservices, and functions to run "serverless". Serverless applications can scale up and down (to zero) on demand and be triggered by a number of event sources. When you install the OpenShift Serverless operator, it automatically creates the knative-serving namespace (for installing the Knative Serving component) and the knative-eventing namespace (required for installing the Knative Eventing component). Knative Eventing component Knative Serving component Knative CLI tool ( kn ) - Allows you to create Knative resources from the command line or from within Shell scripts. 3.2.1.1. Installing OpenShift Serverless You can install the OpenShift Serverless Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. The OpenShift Serverless Operator supports both Knative Serving and Knative Eventing features. For more details, see installing OpenShift Serverless Operator . Prerequisites You have cluster administrator access to an OpenShift project in which the Camel K Operator is installed. You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. For details on how to install the OpenShift CLI, see Installing the OpenShift CLI . Procedure In the OpenShift Container Platform web console, log in by using an account with cluster administrator privileges. In the left navigation menu, click Operators > OperatorHub . In the Filter by keyword text box, enter Serverless to find the OpenShift Serverless Operator . Read the information about the Operator and then click Install to display the Operator subscription page. Select the default subscription settings: Update Channel > Select the channel that matches your OpenShift version, for example, 4.14 Installation Mode > All namespaces on the cluster Approval Strategy > Automatic Note The Approval Strategy > Manual setting is also available if required by your environment. Click Install , and wait a few moments until the Operator is ready for use. Install the required Knative components using the steps in the OpenShift documentation: Installing Knative Serving Installing Knative Eventing (Optional) Download and install the OpenShift Serverless CLI tool: From the Help menu (?) at the top of the OpenShift web console, select Command line tools . Scroll down to the kn - OpenShift Serverless - Command Line Interface section. Click the link to download the binary for your local operating system (Linux, Mac, Windows) Unzip and install the CLI in your system path. To verify that you can access the kn CLI, open a command window and then type the following: kn --help This command shows information about OpenShift Serverless CLI commands. For more details, see the OpenShift Serverless CLI documentation . Additional resources Installing OpenShift Serverless in the OpenShift documentation 3.2.2. Creating a Knative channel A Knative channel is a custom resource that forwards events. After events have been sent to a channel from an event source or producer, these events can be sent to multiple Knative services, or other sinks, by using a subscription. 
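For illustration, a subscription that forwards events from a channel to a Knative service might look like the following sketch; the subscription and subscriber service names are hypothetical placeholders (the channel name matches the mychannel channel created in the procedure below), and the optional delivery block shows where re-delivery attempts can be configured.

apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: mychannel-subscription        # hypothetical name
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: mychannel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display             # hypothetical subscriber service
  delivery:
    retry: 3                          # example number of re-delivery attempts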
This example uses an InMemoryChannel channel, which you use with OpenShift Serverless for development purposes. Note that InMemoryChannel type channels have the following limitations: No event persistence is available. If a pod goes down, events on that pod are lost. InMemoryChannel channels do not implement event ordering, so two events that are received in the channel at the same time can be delivered to a subscriber in any order. If a subscriber rejects an event, there are no re-delivery attempts by default. You can configure re-delivery attempts by modifying the delivery spec in the Subscription object. Prerequisites The OpenShift Serverless operator, Knative Eventing, and Knative Serving components are installed on your OpenShift Container Platform cluster. You have installed the OpenShift Serverless CLI ( kn ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Log in to your OpenShift cluster. Open the project in which you want to create your integration application. For example: oc project camel-k-knative Create a channel by using the Knative ( kn ) CLI command kn channel create <channel_name> --type <channel_type> For example, to create a channel named mychannel : kn channel create mychannel --type messaging.knative.dev:v1:InMemoryChannel To confirm that the channel now exists, type the following command to list all existing channels: kn channel list You should see your channel in the list. steps Connecting a data source to a Knative destination in a Kamelet Binding Connecting a Knative destination to a data sink in a Kamelet Binding 3.2.3. Creating a Knative broker A Knative broker is a custom resource that defines an event mesh for collecting a pool of CloudEvents. OpenShift Serverless provides a default Knative broker that you can create by using the kn CLI. You can use a broker in a Kamelet Binding, for example, when your application handles multiple event types and you do not want to create a channel for each event type. Prerequisites The OpenShift Serverless operator, Knative Eventing, and Knative Serving components are installed on your OpenShift Container Platform cluster. You have installed the OpenShift Serverless CLI ( kn ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Log in to your OpenShift cluster. Open the project in which you want to create your integration application. For example: oc project camel-k-knative Create the broker by using this Knative ( kn ) CLI command: kn broker create default To confirm that the broker now exists, type the following command to list all existing brokers: kn broker list You should see the default broker in the list. steps Connecting a data source to a Knative destination in a Kamelet Binding Connecting a Knative destination to a data sink in a Kamelet Binding 3.3. Connecting a data source to a Knative destination in a Kamelet Binding To connect a data source to a Knative destination (channel or broker), you create a Kamelet Binding as illustrated in Figure 3.2 . Figure 3.2 Connecting a data source to a Knative destination The Knative destination can be a Knative channel or a Knative broker. When you send data to a channel, there is only one event type for the channel. You do not need to specify any property values for the channel in a Kamelet Binding. 
When you send data to a broker, because the broker can handle more than one event type, you must specify a value for the type property when you reference the broker in a Kamelet Binding. Prerequisites You know the name and type of the Knative channel or broker to which you want to send events. The example in this procedure uses the InMemoryChannel channel named mychannel or the broker named default . For the broker example, the type property value is coffee for coffee events. You know which Kamelet you want to add to your Camel integration and the required instance parameters. The example Kamelet for this procedure is the coffee-source Kamelet. It has an optional parameter, period , that specifies how often to send each event. You can copy the code from Example source Kamelet to a file named coffee-source.kamelet.yaml file and then run the following command to add it as a resource to your namespace: oc apply -f coffee-source.kamelet.yaml Procedure To connect a data source to a Knative destination, create a Kamelet Binding: In an editor of your choice, create a YAML file with the following basic structure: Add a name for the Kamelet Binding. For this example, the name is coffees-to-knative because the binding connects the coffee-source Kamelet to a Knative destination. For the Kamelet Binding's source, specify a data source Kamelet (for example, the coffee-source Kamelet produces events that contain data about coffee) and configure any parameters for the Kamelet. For the Kamelet Binding's sink specify the Knative channel or broker and the required parameters. This example specifies a Knative channel as the sink: This example specifies a Knative broker as the sink: Save the YAML file (for example, coffees-to-knative.yaml ). Log into your OpenShift project. Add the Kamelet Binding as a resource to your OpenShift namespace: oc apply -f <kamelet binding filename> For example: oc apply -f coffees-to-knative.yaml The Camel K operator generates and runs a Camel K integration by using the KameletBinding resource. It might take a few minutes to build. To see the status of the KameletBinding : oc get kameletbindings To see the status of their integrations: oc get integrations To view the integration's log: kamel logs <integration> -n <project> For example: kamel logs coffees-to-knative -n my-camel-knative steps Connecting a Knative destination to a data sink in a Kamelet Binding See also Applying operations to data within a connection Handling errors within a connection 3.4. Connecting a Knative destination to a data sink in a Kamelet Binding To connect a Knative destination to a data sink, you create a Kamelet Binding as illustrated in Figure 3.3 . Figure 3.3 Connecting a Knative destination to a data sink The Knative destination can be a Knative channel or a Knative broker. When you send data from a channel, there is only one event type for the channel. You do not need to specify any property values for the channel in a Kamelet Binding. When you send data from a broker, because the broker can handle more than one event type, you must specify a value for the type property when you reference the broker in a Kamelet Binding. Prerequisites You know the name and type of the Knative channel or the name of the broker from which you want to receive events. For a broker, you also know the type of events that you want to receive. The example in this procedure uses the InMemoryChannel channel named mychannel or the broker named mybroker and coffee events (for the type property). 
These are the same example destinations that are used to receive events from the coffee source in Connecting a data source to a Knative destination in a Kamelet Binding. You know which Kamelet you want to add to your Camel integration and the required instance parameters. The example Kamelet for this procedure is the log-sink Kamelet that is provided in the Kamelet Catalog and is useful for testing and debugging. The showStreams parameter is specified to show the message body of the data. Procedure To connect a Knative destination to a data sink, create a Kamelet Binding: In an editor of your choice, create a YAML file with the following basic structure: Add a name for the Kamelet Binding. For this example, the name is knative-to-log because the binding connects the Knative destination to the log-sink Kamelet. For the Kamelet Binding's source, specify the Knative channel or broker and the required parameters. This example specifies a Knative channel as the source: This example specifies a Knative broker as the source: For the Kamelet Binding's sink, specify the data consumer Kamelet (for example, the log-sink Kamelet) and configure any parameters for the Kamelet, for example: Save the YAML file (for example, knative-to-log.yaml). Log in to your OpenShift project. Add the Kamelet Binding as a resource to your OpenShift namespace: oc apply -f <kamelet binding filename> For example: oc apply -f knative-to-log.yaml The Camel K operator generates and runs a Camel K integration by using the KameletBinding resource. It might take a few minutes to build. To see the status of the KameletBinding: oc get kameletbindings To see the status of the integration: oc get integrations To view the integration's log: kamel logs <integration> -n <project> For example: kamel logs knative-to-log -n my-camel-knative In the output, you should see coffee events, for example: To stop a running integration, delete the associated Kamelet Binding resource: oc delete kameletbindings/<kameletbinding-name> For example: oc delete kameletbindings/knative-to-log See also Applying operations to data within a connection Handling errors within a connection
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: spec: source: sink:", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-knative spec: source: sink:", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-knative spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 sink:", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-knative spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 sink: ref: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel name: mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-knative spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 sink: ref: kind: Broker apiVersion: eventing.knative.dev/v1 name: default properties: type: coffee", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: spec: source: sink:", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: knative-to-log spec: source: sink:", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: knative-to-log spec: source: ref: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel name: mychannel sink:", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: knative-to-log spec: source: ref: kind: Broker apiVersion: eventing.knative.dev/v1 name: default properties: type: coffee sink:", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: knative-to-log spec: source: ref: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel name: mychannel sink: ref: apiVersion: camel.apache.org/v1alpha1 kind: Kamelet name: log-sink properties: showStreams: true", "[1] INFO [sink] (vert.x-worker-thread-1) {\"id\":254,\"uid\":\"8e180ef7-8924-4fc7-ab81-d6058618cc42\",\"blend_name\":\"Good-morning Star\",\"origin\":\"Santander, Colombia\",\"variety\":\"Kaffa\",\"notes\":\"delicate, creamy, lemongrass, granola, soil\",\"intensifier\":\"sharp\"} [1] INFO [sink] (vert.x-worker-thread-2) {\"id\":8169,\"uid\":\"3733c3a5-4ad9-43a3-9acc-d4cd43de6f3d\",\"blend_name\":\"Caf? Java\",\"origin\":\"Nayarit, Mexico\",\"variety\":\"Red Bourbon\",\"notes\":\"unbalanced, full, granola, bittersweet chocolate, nougat\",\"intensifier\":\"delicate\"}" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/integrating_applications_with_kamelets/connecting-to-knative-kamelets
A.10. VDSM Hook Return Codes
A.10. VDSM Hook Return Codes Hook scripts must return one of the return codes shown in Table A.3, "Hook Return Codes". The return code determines whether VDSM processes further hook scripts. Table A.3. Hook Return Codes Code Description 0 The hook script ended successfully 1 The hook script failed, other hooks should be processed 2 The hook script failed, no further hooks should be processed >2 Reserved
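A trivial hook script that uses these codes might look like the following sketch; the storage check is purely illustrative and the mount-point path is a hypothetical example, not a real VDSM path.

#!/bin/bash
# Example VDSM hook sketch: abort the operation if an assumed mount point is missing.
if ! mountpoint -q /example/required-storage; then   # hypothetical precondition
    echo "required storage is not mounted" >&2
    exit 2   # failure; VDSM processes no further hooks
fi
exit 0       # success; any remaining hooks are processed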
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/VDSM_hooks_return_codes
Chapter 16. Uninstalling Fuse on OpenShift
Chapter 16. Uninstalling Fuse on OpenShift To uninstall Fuse on OpenShift, remove the imagestreams and templates from registry.redhat.io with the oc delete command. 16.1. Uninstalling Fuse imagestreams and templates on the OpenShift 4.x server Procedure Find the BASEURL for your version and define it as a variable for use in the commands below. Delete the Spring Boot 2 quickstart templates. Delete the Fuse on OpenShift quickstart templates. Delete the imagestreams. Remove items in the Samples Operator. Edit the configuration of the Samples Operator: Remove the Fuse and Spring Boot 2 templates from the skippedImagestreams and skippedTemplates section. Built-in imagestreams Some imagestreams and templates are built in for common use cases. These are managed by the Samples Operator, so you cannot remove them manually. You can ignore them while uninstalling. Built-in imagestreams are configured with the samples.operator.openshift.io/managed: "true" label in the manifest, so you can verify whether an imagestream is managed by using the oc get and grep commands. Example
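After running the deletions, you can optionally confirm that only Samples Operator-managed items remain; this verification step is a suggestion rather than part of the official procedure.

oc get is -n openshift | grep fuse
oc get templates -n openshift | grep -E 'fuse|spring-boot-2'

Any imagestream still listed should carry the samples.operator.openshift.io/managed: "true" label described above.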
[ "BASEURL=https://raw.githubusercontent.com/jboss-fuse/application-templates/application-templates-2.1.0.fuse-7_13_0-00014-redhat-00001", "for template in spring-boot-2-camel-amq-template.json spring-boot-2-camel-config-template.json spring-boot-2-camel-drools-template.json spring-boot-2-camel-infinispan-template.json spring-boot-2-camel-rest-3scale-template.json spring-boot-2-camel-rest-sql-template.json spring-boot-2-camel-template.json spring-boot-2-camel-xa-template.json spring-boot-2-camel-xml-template.json spring-boot-2-cxf-jaxrs-template.json spring-boot-2-cxf-jaxws-template.json spring-boot-2-cxf-jaxrs-xml-template.json spring-boot-2-cxf-jaxws-xml-template.json ; do oc delete -n openshift -f USD{BASEURL}/quickstarts/USD{template} done", "for template in eap-camel-amq-template.json eap-camel-cdi-template.json eap-camel-cxf-jaxrs-template.json eap-camel-cxf-jaxws-template.json karaf-camel-amq-template.json karaf-camel-log-template.json karaf-camel-rest-sql-template.json karaf-cxf-rest-template.json ; do oc delete -n openshift -f USD{BASEURL}/quickstarts/USD{template} done", "delete -n openshift -f USD{BASEURL}/fis-image-streams.json", "edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "]USD oc get is fuse7-eap-openshift -n openshift -o yaml | grep 'samples.operator.openshift.io/managed' samples.operator.openshift.io/managed: \"true\" ]USD" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/fuse_on_openshift_guide/uninstalling-fuse-on-openshift
Configuring and managing cloud-init for RHEL 9
Configuring and managing cloud-init for RHEL 9 Red Hat Enterprise Linux 9 Using cloud-init to automate the initialization of cloud instances Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_cloud-init_for_rhel_9/index
Chapter 24. Automating System Tasks
Chapter 24. Automating System Tasks You can configure Red Hat Enterprise Linux to automatically run tasks, also known as jobs : regularly at a specified time using cron , see Section 24.1, "Scheduling a Recurring Job Using Cron" asynchronously at certain days using anacron , see Section 24.2, "Scheduling a Recurring Asynchronous Job Using Anacron" once at a specific time using at , see Section 24.3, "Scheduling a Job to Run at a Specific Time Using at" once when system load average drops to a specified value using batch , see Section 24.4, "Scheduling a Job to Run on System Load Drop Using batch" once at boot, see Section 24.5, "Scheduling a Job to Run on Boot Using a systemd Unit File" This chapter describes how to perform these tasks. 24.1. Scheduling a Recurring Job Using Cron Cron is a service that enables you to schedule running a task, often called a job, at regular times. A cron job is only executed if the system is running at the scheduled time. For scheduling jobs that can postpone their execution to when the system boots up, so a job is not "lost" if the system is not running, see Section 24.2, "Scheduling a Recurring Asynchronous Job Using Anacron" . Users specify cron jobs in cron table files, also called crontab files. These files are then read by the crond service, which executes the jobs. 24.1.1. Prerequisites for Cron Jobs Before scheduling a cron job: Install the cronie package: The crond service is enabled - made to start automatically at boot time - upon installation. If you disabled the service, enable it: Start the crond service for the current session: (optional) Configure cron . For example, you can change: the shell to be used when executing jobs the PATH environment variable the mail addressee if a job sends emails. See the crontab (5) manual page for information on configuring cron . 24.1.2. Scheduling a Cron Job Scheduling a Job as root User The root user uses the cron table in /etc/crontab , or, preferably, creates a cron table file in /etc/cron.d/ . Use this procedure to schedule a job as root : Choose: in which minutes of an hour to execute the job. For example, use 0,10,20,30,40,50 or 0/10 to specify every 10 minutes of an hour. in which hours of a day to execute the job. For example, use 17-20 to specify the time from 17:00 to 20:59. in which days of a month to execute the job. For example, use 15 to specify the 15th day of a month. in which months of a year to execute the job. For example, use Jun,Jul,Aug or 6,7,8 to specify the summer months of the year. in which days of the week to execute the job. For example, use * for the job to execute independently of the day of the week. Combine the chosen values into the time specification. The above example values result in this specification: 0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * Specify the user. The job will execute as if run by this user. For example, use root . Specify the command to execute. For example, use /usr/local/bin/my-script.sh Put the above specifications into a single line: Add the resulting line to /etc/crontab , or, preferably, create a cron table file in /etc/cron.d/ and add the line there. The job will now run as scheduled; a sketch of such a drop-in file follows below. For full reference on how to specify a job, see the crontab (5) manual page. For basic information, see the beginning of the /etc/crontab file: Scheduling a Job as Non-root User Non-root users can use the crontab utility to configure cron jobs. The jobs will run as if executed by that user.
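As a concrete illustration of the drop-in file mentioned in the root procedure above (a sketch; the file name my-script is an arbitrary example):

~]# cat /etc/cron.d/my-script
0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * root /usr/local/bin/my-script.sh

crond picks up files in /etc/cron.d/ automatically; no service restart is needed.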
To create a cron job as a specific user: From the user's shell, run: This starts editing of the user's own crontab file using the editor specified by the VISUAL or EDITOR environment variable. Specify the job in the same way as in the section called "Scheduling a Job as root User" , but leave out the field with the user name. For example, instead of adding add: Save the file and exit the editor. (optional) To verify the new job, list the contents of the current user's crontab file by running: Scheduling Hourly, Daily, Weekly, and Monthly Jobs To schedule an hourly, daily, weekly, or monthly job: Put the actions you want your job to execute into a shell script. Put the shell script into one of the following directories: /etc/cron.hourly/ /etc/cron.daily/ /etc/cron.weekly/ /etc/cron.monthly/ From now on, your script will be executed - the crond service automatically executes any scripts present in the /etc/cron.hourly , /etc/cron.daily , /etc/cron.weekly , and /etc/cron.monthly directories at their corresponding times. 24.2. Scheduling a Recurring Asynchronous Job Using Anacron Anacron , like cron , is a service that enables you to schedule running a task, often called a job, at regular times. However, anacron differs from cron in two ways: If the system is not running at the scheduled time, an anacron job is postponed until the system is running; An anacron job can run once per day at most. Users specify anacron jobs in anacron table files, also called anacrontab files. These files are then read by the crond service, which executes the jobs. 24.2.1. Prerequisites for Anacron Jobs Before scheduling an anacron job: Verify that you have the cronie-anacron package installed: The cronie-anacron package is likely to be installed already, because it is a sub-package of the cronie package. If it is not installed, use this command: The crond service is enabled - made to start automatically at boot time - upon installation. If you disabled the service, enable it: Start the crond service for the current session: (optional) Configure anacron . For example, you can change: the shell to be used when executing jobs the PATH environment variable the mail addressee if a job sends emails. See the anacrontab (5) manual page for information on configuring anacron . Important By default, the anacron configuration includes a condition that prevents it from running if the computer is not plugged in. This setting ensures that the battery is not drained by running anacron jobs. If you want to allow anacron to run even if the computer runs on battery power, open the /etc/cron.hourly/0anacron file and comment out the following part: 24.2.2. Scheduling an Anacron Job Scheduling an anacron Job as root User The root user uses the anacron table in /etc/anacrontab . Use the following procedure to schedule a job as root : Choose: The frequency of executing the job. For example, use 1 to specify every day or 3 to specify once every 3 days. The delay of executing the job. For example, use 0 to specify no delay or 60 to specify 1 hour of delay. The job identifier, which will be used for logging. For example, use my.anacron.job to log the job with the my.anacron.job string. The command to execute. For example, use /usr/local/bin/my-script.sh Combine the chosen values into the job specification. Here is an example specification: Add the resulting line to /etc/anacrontab . The job will now run as scheduled. For simple job examples, see the /etc/anacrontab file.
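For instance, appending the job specification above to /etc/anacrontab and checking the file for syntax errors could look like this (a sketch; anacron's -T option only tests the anacrontab for validity and runs no jobs):

~]# echo "3 60 my.anacron.job /usr/local/bin/my-script.sh" >> /etc/anacrontab
~]# anacron -T   # prints a message and exits non-zero if the file is invalid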
For full reference on how to specify a job, see the anacrontab (5) manual page. Scheduling Hourly, Daily, Weekly, and Monthly Jobs You can schedule daily, weekly, and monthly jobs with anacron . See the section called "Scheduling Hourly, Daily, Weekly, and Monthly Jobs" . 24.3. Scheduling a Job to Run at a Specific Time Using at To schedule a one-time task, also called a job, to run once at a specific time, use the at utility. Users specify at jobs using the at utility. The jobs are then executed by the atd service. 24.3.1. Prerequisites for At Jobs Before scheduling an at job: Install the at package: The atd service is enabled - made to start automatically at boot time - upon installation. If you disabled the service, enable it: Start the atd service for the current session: 24.3.2. Scheduling an At Job A job is always run by some user. Log in as the desired user and run: Replace time with the time specification. For details on specifying time, see the at (1) manual page and the /usr/share/doc/at/timespec file. Example 24.1. Specifying Time for At To execute the job at 15:00, run: If the specified time has passed, the job is executed at the same time the next day. To execute the job on August 20 2017, run: or To execute the job 5 days from now, run: At the displayed at> prompt, enter the command to execute and press Enter: Repeat this step for every command you want to execute. Note The at> prompt shows which shell it will use: The at utility uses the shell set in the user's SHELL environment variable, or the user's login shell, or /bin/sh , whichever is found first. Press Ctrl+D on an empty line to finish specifying the job. Note If the set of commands or the script tries to display information to standard output, the output is emailed to the user. Viewing Pending Jobs To view the list of pending jobs, use the atq command: Each job is listed on a separate line in the following format: The job_class column specifies whether a job is an at or a batch job. a stands for at , b stands for batch . Non-root users only see their own jobs. The root user sees jobs for all users. Deleting a Scheduled Job To delete a scheduled job: List pending jobs with the atq command: Find the job you want to delete by its scheduled time and the user. Run the atrm command, specifying the job by its number: 24.3.2.1. Controlling Access to At and Batch You can restrict access to the at and batch commands for specific users. To do this, put user names into /etc/at.allow or /etc/at.deny according to these rules: Both access control files use the same format: one user name on each line. No white space is permitted in either file. If the at.allow file exists, only users listed in the file are allowed to use at or batch , and the at.deny file is ignored. If at.allow does not exist, users listed in at.deny are not allowed to use at or batch . The root user is not affected by the access control files and can always execute the at and batch commands. The at daemon ( atd ) does not have to be restarted if the access control files are modified. The access control files are read each time a user tries to execute the at or batch commands. 24.4. Scheduling a Job to Run on System Load Drop Using batch To schedule a one-time task, also called a job, to run when the system load average drops below the specified value, use the batch utility. This can be useful for performing resource-demanding tasks or for preventing the system from being idle. Users specify batch jobs using the batch utility. The jobs are then executed by the atd service.
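Besides typing commands at the at> prompt, both at and batch accept a job on standard input, which is convenient in scripts. A sketch, reusing the example script path from this chapter:

~]# echo "sh /usr/local/bin/my-script.sh" | at 15:00
~]# echo "sh /usr/local/bin/my-script.sh" | batch

In both cases at prints the assigned job number, which you can later inspect with atq or remove with atrm.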
24.4.1. Prerequisites for Batch Jobs The batch utility is provided in the at package, and batch jobs are managed by the atd service. Hence, the prerequisites for batch jobs are the same as for at jobs. See Section 24.3.1, "Prerequisites for At Jobs" . 24.4.2. Scheduling a Batch Job A job is always run by some user. Log in as the desired user and run: At the displayed at> prompt, enter the command to execute and press Enter: Repeat this step for every command you want to execute. Note The at> prompt shows which shell it will use: The batch utility uses the shell set in the user's SHELL environment variable, or the user's login shell, or /bin/sh , whichever is found first. Press Ctrl+D on an empty line to finish specifying the job. Note If the set of commands or the script tries to display information to standard output, the output is emailed to the user. Changing the Default System Load Average Limit By default, batch jobs start when the system load average drops below 0.8. This setting is kept in the atd service. To change the system load limit: Add this line to the /etc/sysconfig/atd file: Substitute x with the new load average. For example: Restart the atd service: Viewing Pending Jobs To view the list of pending jobs, use the atq command. See the section called "Viewing Pending Jobs" . Deleting a Scheduled Job To delete a scheduled job, use the atrm command. See the section called "Deleting a Scheduled Job" . Controlling Access to Batch You can also restrict the usage of the batch utility. This is done for the batch and at utilities together. See Section 24.3.2.1, "Controlling Access to At and Batch" . 24.5. Scheduling a Job to Run on Boot Using a systemd Unit File The cron , anacron , at , and batch utilities allow scheduling jobs for specific times or for when system workload reaches a certain level. It is also possible to create a job that will run during the system boot. This is done by creating a systemd unit file that specifies the script to run and its dependencies. To configure a script to run at boot: Create the systemd unit file that specifies at which stage of the boot process to run the script. This example shows a unit file with a reasonable set of Wants= and After= dependencies: If you use this example: substitute /usr/local/bin/foobar.sh with the name of your script modify the set of After= entries if necessary For information on specifying the stage of boot, see Section 10.6, "Creating and Modifying systemd Unit Files" . If you want the systemd service to stay active after executing the script, add the RemainAfterExit=yes line to the [Service] section: Reload the systemd daemon: Enable the systemd service: Create the script to execute: If you want the script to run only once, during the next boot, and not on every boot, add a line that disables the systemd unit: Make the script executable: 24.6. Additional Resources For more information on automating system tasks on Red Hat Enterprise Linux, see the resources listed below. Installed Documentation cron - The manual page for the crond daemon documents how crond works and how to change its behavior. crontab - The manual page for the crontab utility provides a complete list of supported options. crontab (5) - This section of the manual page for the crontab utility documents the format of crontab files.
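After enabling the boot-time unit from Section 24.5, a quick way to confirm that it is wired into the boot process (a sketch; one-time.service is the example unit name from that section) is:

~]# systemctl is-enabled one-time.service
enabled
~]# systemctl list-dependencies multi-user.target | grep one-time

The second command shows the unit among the dependencies of multi-user.target, which is what the WantedBy=multi-user.target line in the [Install] section arranges.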
[ "~]# yum install cronie", "~]# systemctl enable crond.service", "~]# systemctl start crond.service", "0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * root /usr/local/bin/my-script.sh", "SHELL=/bin/bash PATH=/sbin:/bin:/usr/sbin:/usr/bin MAILTO=root For details see man 4 crontabs Example of job definition: .---------------- minute (0 - 59) | .------------- hour (0 - 23) | | .---------- day of month (1 - 31) | | | .------- month (1 - 12) OR jan,feb,mar,apr | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat | | | | | * * * * * user-name command to be executed", "[bob@localhost ~]USD crontab -e", "0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * bob /home/bob/bin/script.sh", "0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * /home/bob/bin/script.sh", "[bob@localhost ~]USD crontab -l @daily /home/bob/bin/script.sh", "~]# rpm -q cronie-anacron", "~]# yum install cronie-anacron", "~]# systemctl enable crond.service", "~]# systemctl start crond.service", "Do not run jobs when on battery power online=1 for psupply in AC ADP0 ; do sysfile=\"/sys/class/power_supply/USDpsupply/online\" if [ -f USDsysfile ] ; then if [ `cat USDsysfile 2>/dev/null`x = 1x ]; then online=1 break else online=0 fi fi done", "3 60 cron.daily /usr/local/bin/my-script.sh", "~]# yum install at", "~]# systemctl enable atd.service", "~]# systemctl start atd.service", "~]# at time", "~]# at 15:00", "~]# at August 20 2017", "~]# at 082017", "~]# now + 5 days", "~]# at 15:00 at> sh /usr/local/bin/my-script.sh at>", "warning: commands will be executed using /bin/sh", "~]# atq 26 Thu Feb 23 15:00:00 2017 a root 28 Thu Feb 24 17:30:00 2017 a root", "job_number scheduled_date scheduled_hour job_class user_name", "~]# atq 26 Thu Feb 23 15:00:00 2017 a root 28 Thu Feb 24 17:30:00 2017 a root", "~]# atrm 26", "~]# batch", "~]# batch at> sh /usr/local/bin/my-script.sh", "warning: commands will be executed using /bin/sh", "OPTS='-l x '", "OPTS='-l 0.5'", "systemctl restart atq", "~]# cat /etc/systemd/system/one-time.service The script needs to execute after: network interfaces are configured Wants=network-online.target After=network-online.target all remote filesystems (NFS/_netdev) are mounted After=remote-fs.target name (DNS) and user resolution from remote databases (AD/LDAP) are available After=nss-user-lookup.target nss-lookup.target the system clock has synchronized After=time-sync.target [Service] Type=oneshot ExecStart=/usr/local/bin/foobar.sh [Install] WantedBy=multi-user.target", "[Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/local/bin/foobar.sh", "~]# systemctl daemon-reload", "~]# systemctl enable one-time.service", "~]# cat /usr/local/bin/foobar.sh #!/bin/bash touch /root/test_file", "#!/bin/bash touch /root/test_file systemctl disable one-time.service", "~]# chmod +x /usr/local/bin/foobar.sh" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-automating_system_tasks
Chapter 5. Pools, placement groups, and CRUSH configuration
Chapter 5. Pools, placement groups, and CRUSH configuration As a storage administrator, you can choose to use the Red Hat Ceph Storage default options for pools, placement groups, and the CRUSH algorithm or customize them for the intended workload. Prerequisites Installation of the Red Hat Ceph Storage software. 5.1. Pools, placement groups, and CRUSH When you create pools and set the number of placement groups for the pool, Ceph uses default values when you do not specifically override the defaults. Important Red Hat recommends overriding some of the defaults. Specifically, set a pool's replica size and override the default number of placement groups. You can set these values when running pool commands. By default, Ceph makes 3 replicas of objects. If you want to set 4 copies of an object as the default value, a primary copy and three replica copies, reset the default osd_pool_default_size value as shown in the following example. If you want to allow Ceph to write fewer copies in a degraded state, set osd_pool_default_min_size to a number less than the osd_pool_default_size value. Example Ensure you have a realistic number of placement groups. Red Hat recommends approximately 100 per OSD. For example, total number of OSDs multiplied by 100 divided by the number of replicas, that is, osd_pool_default_size . For 10 OSDs and osd_pool_default_size = 4, we would recommend approximately (100 * 10) / 4 = 250. Example Additional resources See all the Red Hat Ceph Storage pool, placement group, and CRUSH configuration options in Appendix E for specific option descriptions and usage.
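Applying the same sizing rule to a different, hypothetical cluster (the OSD count and replica size here are assumed example values, not recommendations) would look like this:

# 20 OSDs with 3 replicas: (100 * 20) / 3 = 666.67, so roughly 666 placement groups
ceph config set global osd_pool_default_size 3
ceph config set global osd_pool_default_pg_num 666
ceph config set global osd_pool_default_pgp_num 666

Keeping osd_pool_default_pg_num and osd_pool_default_pgp_num equal, as in the example above, is the usual practice.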
[ "ceph config set global osd_pool_default_size 4 # Write an object 4 times. ceph config set global osd_pool_default_min_size 1 # Allow writing one copy in a degraded state.", "ceph config set global osd_pool_default_pg_num 250 ceph config set global osd_pool_default_pgp_num 250" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/pools-placement-groups-and-crush-configuration
Chapter 3. Getting started
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites To build the example, Maven must be configured to use the Red Hat repository or a local repository . You must install the examples . You must have a message broker listening for connections on localhost . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named exampleQueue . For more information, see Creating a queue . 3.2. Running your first example The example creates a consumer and producer for a queue named exampleQueue . It sends a text message and then receives it back, printing the received message to the console. Procedure Use Maven to build the examples by running the following command in the <install-dir> /examples/protocols/openwire/queue directory. $ mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the example. On Linux or UNIX: $ java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample On Windows: > java -cp "target\classes;target\dependency\*" org.apache.activemq.artemis.jms.example.QueueExample Running it on Linux results in the following output: $ java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message The source code for the example is in the <install-dir> /examples/protocols/openwire/queue/src directory. Additional examples are available in the <install-dir> /examples/protocols/openwire directory.
[ "mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample", "> java -cp \"target\\classes;target\\dependency\\*\" org.apache.activemq.artemis.jms.example.QueueExample", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_openwire_jms_client/getting_started
Chapter 2. Installing Capsule Server
Chapter 2. Installing Capsule Server Before you install Capsule Server, you must ensure that your environment meets the requirements for installation. For more information, see Preparing your Environment for Installation . 2.1. Registering to Satellite Server Use this procedure to register the base operating system on which you want to install Capsule Server to Satellite Server. Red Hat subscription manifest prerequisites On Satellite Server, a manifest must be installed and it must contain the appropriate repositories for the organization you want Capsule to belong to. The manifest must contain repositories for the base operating system on which you want to install Capsule, as well as any clients that you want to connect to Capsule. The repositories must be synchronized. For more information on manifests and repositories, see Managing Red Hat Subscriptions in Managing content . Proxy and network prerequisites The Satellite Server base operating system must be able to resolve the host name of the Capsule base operating system and vice versa. Ensure HTTPS connection using client certificate authentication is possible between Capsule Server and Satellite Server. HTTP proxies between Capsule Server and Satellite Server are not supported. You must configure the host and network-based firewalls accordingly. For more information, see Port and firewall requirements in Installing Capsule Server . You can register hosts with Satellite using the host registration feature in the Satellite web UI, Hammer CLI, or the Satellite API. For more information, see Registering Hosts in Managing hosts . Procedure In the Satellite web UI, navigate to Hosts > Register Host . From the Activation Keys list, select the activation keys to assign to your host. Click Generate to create the registration command. Click on the files icon to copy the command to your clipboard. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. CLI procedure Generate the host registration command using the Hammer CLI: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. API procedure Generate the host registration command using the Satellite API: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Use an activation key to simplify specifying the environments. For more information, see Managing Activation Keys in Managing content . To enter a password as a command line argument, use username:password syntax. Keep in mind this can save the password in the shell history. Alternatively, you can use a temporary personal access token instead of a password. To generate a token in the Satellite web UI, navigate to My Account > Personal Access Tokens . Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. 2.2. Configuring repositories Use this procedure to enable the repositories that are required to install Capsule Server. 
Disable all repositories: Enable the following repositories: Enable the module: Note If there is any warning about conflicts with Ruby or PostgreSQL while enabling the satellite-capsule:el8 module, see Troubleshooting DNF modules . For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . Note If you are installing Capsule Server as a virtual machine hosted on Red Hat Virtualization, you must also enable the Red Hat Common repository and then install Red Hat Virtualization guest agents and drivers. For more information, see Installing the Guest Agents and Drivers on Red Hat Enterprise Linux in the Virtual Machine Management Guide . Optional: Verify that the required repositories are enabled: 2.3. Optional: Using fapolicyd on Capsule Server By enabling fapolicyd on your Capsule Server, you can provide an additional layer of security by monitoring and controlling access to files and directories. The fapolicyd daemon uses the RPM database as a repository of trusted binaries and scripts. You can turn fapolicyd on or off on your Satellite Server or Capsule Server at any point. 2.3.1. Installing fapolicyd on Capsule Server You can install fapolicyd along with Capsule Server, or install it on an existing Capsule Server. If you are installing fapolicyd along with a new Capsule Server, the installation process detects fapolicyd on your Red Hat Enterprise Linux host and deploys the Capsule Server rules automatically. Prerequisites Ensure your host has access to the BaseOS repositories of Red Hat Enterprise Linux. Procedure For a new installation, install fapolicyd: For an existing installation, install fapolicyd using satellite-maintain packages install: Start the fapolicyd service: Verification Verify that the fapolicyd service is running correctly: New Satellite Server or Capsule Server installations In the case of a new Satellite Server or Capsule Server installation, follow the standard installation procedures after installing and enabling fapolicyd on your Red Hat Enterprise Linux host. Additional resources For more information on fapolicyd, see Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 8 Security hardening . 2.4. Installing Capsule Server packages Before installing Capsule Server packages, you must update all packages that are installed on the base operating system. Procedure To install Capsule Server, complete the following steps: Update all packages: Install the Capsule Server packages: 2.5. Synchronizing the system clock with chronyd To minimize the effects of time drift, you must synchronize the system clock on the base operating system on which you want to install Capsule Server with Network Time Protocol (NTP) servers. If the base operating system clock is configured incorrectly, certificate verification might fail. For more information about the chrony suite, see Using the Chrony suite to configure NTP in Red Hat Enterprise Linux 8 Configuring basic system settings . Procedure Install the chrony package: Start and enable the chronyd service: 2.6. Configuring Capsule Server with SSL certificates Red Hat Satellite uses SSL certificates to enable encrypted communications between Satellite Server, external Capsule Servers, and all hosts. Depending on the requirements of your organization, you must configure your Capsule Server with a default or custom certificate.
If you use a default SSL certificate, you must also configure each external Capsule Server with a distinct default SSL certificate. For more information, see Section 2.6.1, "Configuring Capsule Server with a default SSL certificate" . If you use a custom SSL certificate, you must also configure each external Capsule Server with a distinct custom SSL certificate. For more information, see Section 2.6.2, "Configuring Capsule Server with a custom SSL certificate" . 2.6.1. Configuring Capsule Server with a default SSL certificate Use this section to configure Capsule Server with an SSL certificate that is signed by Satellite Server default Certificate Authority (CA). Prerequisites Capsule Server is registered to Satellite Server. For more information, see Registering to Satellite Server . Capsule Server packages are installed. For more information, see Installing Capsule Server Packages . Procedure On Satellite Server, to store all the source certificate files for your Capsule Server, create a directory that is accessible only to the root user, for example /root/capsule_cert : On Satellite Server, generate the /root/capsule_cert/ capsule.example.com -certs.tar certificate archive for your Capsule Server: Retain a copy of the satellite-installer command that the capsule-certs-generate command returns for deploying the certificate to your Capsule Server. Example output of capsule-certs-generate On Satellite Server, copy the certificate archive file to your Capsule Server: On Capsule Server, to deploy the certificate, enter the satellite-installer command that the capsule-certs-generate command returns. When network connections or ports to Satellite are not yet open, you can set the --foreman-proxy-register-in-foreman option to false to prevent Capsule from attempting to connect to Satellite and reporting errors. Run the installer again with this option set to true when the network and firewalls are correctly configured. Important Do not delete the certificate archive file after you deploy the certificate. It is required, for example, when upgrading Capsule Server. 2.6.2. Configuring Capsule Server with a custom SSL certificate If you configure Satellite Server to use a custom SSL certificate, you must also configure each of your external Capsule Servers with a distinct custom SSL certificate. To configure your Capsule Server with a custom certificate, complete the following procedures on each Capsule Server: Section 2.6.2.1, "Creating a custom SSL certificate for Capsule Server" Section 2.6.2.2, "Deploying a custom SSL certificate to Capsule Server" Section 2.6.2.3, "Deploying a custom SSL certificate to hosts" 2.6.2.1. Creating a custom SSL certificate for Capsule Server On Satellite Server, create a custom certificate for your Capsule Server. If you already have a custom SSL certificate for Capsule Server, skip this procedure. Procedure To store all the source certificate files, create a directory that is accessible only to the root user: Create a private key with which to sign the certificate signing request (CSR). Note that the private key must be unencrypted. If you use a password-protected private key, remove the private key password. If you already have a private key for this Capsule Server, skip this step. 
Create the /root/capsule_cert/openssl.cnf configuration file for the CSR and include the following content: Optional: If you want to add Distinguished Name (DN) details to the CSR, add the following information to the [ req_distinguished_name ] section: 1 Two letter code 2 Full name 3 Full name (example: New York) 4 Division responsible for the certificate (example: IT department) Generate CSR: 1 Path to the private key 2 Path to the configuration file 3 Path to the CSR to generate Send the certificate signing request to the certificate authority (CA). The same CA must sign certificates for Satellite Server and Capsule Server. When you submit the request, specify the lifespan of the certificate. The method for sending the certificate request varies, so consult the CA for the preferred method. In response to the request, you can expect to receive a CA bundle and a signed certificate, in separate files. 2.6.2.2. Deploying a custom SSL certificate to Capsule Server Use this procedure to configure your Capsule Server with a custom SSL certificate signed by a Certificate Authority. The satellite-installer command, which the capsule-certs-generate command returns, is unique to each Capsule Server. Do not use the same command on more than one Capsule Server. Prerequisites Satellite Server is configured with a custom certificate. For more information, see Configuring Satellite Server with a Custom SSL Certificate in Installing Satellite Server in a connected network environment . Capsule Server is registered to Satellite Server. For more information, see Registering to Satellite Server . Capsule Server packages are installed. For more information, see Installing Capsule Server Packages . Procedure On your Satellite Server, generate a certificate bundle: 1 Path to Capsule Server certificate file that is signed by a Certificate Authority. 2 Path to the private key that was used to sign Capsule Server certificate. 3 Path to the Certificate Authority bundle. Retain a copy of the satellite-installer command that the capsule-certs-generate command returns for deploying the certificate to your Capsule Server. Example output of capsule-certs-generate On your Satellite Server, copy the certificate archive file to your Capsule Server: On your Capsule Server, to deploy the certificate, enter the satellite-installer command that the capsule-certs-generate command returns. If network connections or ports to Satellite are not yet open, you can set the --foreman-proxy-register-in-foreman option to false to prevent Capsule from attempting to connect to Satellite and reporting errors. Run the installer again with this option set to true when the network and firewalls are correctly configured. Important Do not delete the certificate archive file after you deploy the certificate. It is required, for example, when upgrading Capsule Server. 2.6.2.3. Deploying a custom SSL certificate to hosts After you configure Satellite to use a custom SSL certificate, you must deploy the certificate to hosts registered to Satellite. Procedure Update the SSL certificate on each host: 2.7. Assigning the correct organization and location to Capsule Server in the Satellite web UI After installing Capsule Server packages, if there is more than one organization or location, you must assign the correct organization and location to Capsule to make Capsule visible in the Satellite web UI. 
Note Assigning a Capsule to the same location as your Satellite Server with an embedded Capsule prevents Red Hat Insights from uploading the Insights inventory. To enable the inventory upload, synchronize SSH keys for both Capsules. Procedure Log in to the Satellite web UI. From the Organization list in the upper-left of the screen, select Any Organization . From the Location list in the upper-left of the screen, select Any Location . In the Satellite web UI, navigate to Hosts > All Hosts and select Capsule Server. From the Select Actions list, select Assign Organization . From the Organization list, select the organization where you want to assign this Capsule. Click Fix Organization on Mismatch . Click Submit . Select Capsule Server. From the Select Actions list, select Assign Location . From the Location list, select the location where you want to assign this Capsule. Click Fix Location on Mismatch . Click Submit . In the Satellite web UI, navigate to Administer > Organizations and click the organization to which you have assigned Capsule. Click the Capsules tab and ensure that Capsule Server is listed under the Selected items list, then click Submit . In the Satellite web UI, navigate to Administer > Locations and click the location to which you have assigned Capsule. Click the Capsules tab and ensure that Capsule Server is listed under the Selected items list, then click Submit . Verification Optionally, you can verify that Capsule Server is correctly listed in the Satellite web UI. Select the organization from the Organization list. Select the location from the Location list. In the Satellite web UI, navigate to Hosts > All Hosts . In the Satellite web UI, navigate to Infrastructure > Capsules .
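For a quick command-line cross-check (a sketch; adjust the Capsule name to your environment), the hammer CLI can list Capsules and show their assigned organizations and locations:

$ hammer capsule list
$ hammer capsule info --name "capsule.example.com"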
[ "hammer host-registration generate-command --activation-keys \" My_Activation_Key \"", "hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true", "curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'", "curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-capsule-6.15-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms", "dnf module enable satellite-capsule:el8", "dnf repolist enabled", "dnf install fapolicyd", "satellite-maintain packages install fapolicyd", "systemctl enable --now fapolicyd", "systemctl status fapolicyd", "dnf upgrade", "dnf install satellite-capsule", "dnf install chrony", "systemctl enable --now chronyd", "mkdir /root/ capsule_cert", "capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar /root/capsule_cert/ capsule.example.com -certs.tar", "output omitted satellite-installer --scenario capsule --certs-tar-file \"/root/capsule_cert/ capsule.example.com -certs.tar\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \"https:// satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" s97QxvUAgFNAQZNGg4F9zLq2biDsxM7f \" --foreman-proxy-oauth-consumer-secret \" 6bpzAdMpRAfYaVZtaepYetomgBVQ6ehY \"", "scp /root/capsule_cert/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar", "mkdir /root/capsule_cert", "openssl genrsa -out /root/capsule_cert/capsule_cert_key.pem 4096", "[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] commonName = capsule.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ alt_names ] DNS.1 = capsule.example.com", "[req_distinguished_name] CN = capsule.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4", "openssl req -new -key /root/capsule_cert/capsule_cert_key.pem \\ 1 -config /root/capsule_cert/openssl.cnf \\ 2 -out /root/capsule_cert/capsule_cert_csr.pem 3", "capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar ~/ capsule.example.com -certs.tar --server-cert /root/ capsule_cert/capsule_cert.pem \\ 1 --server-key /root/ capsule_cert/capsule_cert_key.pem \\ 2 --server-ca-cert /root/ capsule_cert/ca_cert_bundle.pem 3", "output omitted satellite-installer --scenario capsule --certs-tar-file \"/root/ capsule.example.com -certs.tar\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \"https:// satellite.example.com \" 
--foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" My_OAuth_Consumer_Key \" --foreman-proxy-oauth-consumer-secret \" My_OAuth_Consumer_Secret \"", "scp ~/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar", "dnf install http:// capsule.example.com /pub/katello-ca-consumer-latest.noarch.rpm" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_capsule_server/installing-capsule-server
E.2. Metadata Contents
E.2. Metadata Contents The volume group metadata contains: Information about how and when it was created Information about the volume group The volume group information contains: Name and unique ID A version number, which is incremented whenever the metadata gets updated Any properties, such as: read/write or resizable Any administrative limit on the number of physical volumes and logical volumes it may contain The extent size (in units of sectors, which are defined as 512 bytes) An unordered list of physical volumes making up the volume group, each with: Its UUID, used to determine the block device containing it Any properties, such as whether the physical volume is allocatable The offset to the start of the first extent within the physical volume (in sectors) The number of extents An unordered list of logical volumes, each consisting of: An ordered list of logical volume segments. For each segment, the metadata includes a mapping applied to an ordered list of physical volume segments or logical volume segments
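To inspect this metadata on a live system (a sketch; the volume group name my_vg is an example), you can write the current metadata out as text with vgcfgbackup:

# vgcfgbackup -f /tmp/my_vg_metadata.txt my_vg
# less /tmp/my_vg_metadata.txt

The resulting file shows the volume group properties, the physical volume list with UUIDs and extent counts, and the per-logical-volume segment mappings described above.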
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/metadata_contents
Network APIs
Network APIs OpenShift Container Platform 4.13 Reference guide for network APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_apis/index
Chapter 2. Migration Changes
Chapter 2. Migration Changes 2.1. User Profile Changes 2.1.1. User profile enabled by default The user profile feature is now enabled by default. The declarative-user-profile feature is no longer available, because the user profile is assumed to be enabled. Therefore, the User Profile Enabled switch is removed from the Realm Settings and replaced by Unmanaged attributes . When migrating from a previous version, the behavior is as follows: For deployments where User Profile Enabled was set to ON , Unmanaged attributes will be set to OFF after the upgrade. As a result, only user attributes supported explicitly by the user profile will be allowed. For deployments where User Profile Enabled was set to OFF (which was also the default setting for deployments with the declarative-user-profile feature disabled, which was the default), Unmanaged attributes will be set to ON after the upgrade. As a result, the behavior should be basically the same as in previous versions with the user profile disabled. The Attributes tab will remain in the user details part of the Admin Console. Also, users can now set arbitrary attributes through UI pages such as the registration page and update profile page as long as a particular custom theme supports it. The custom themes should work as before as well. However, consider updating your themes to use the user profile, and maybe even remove your custom themes if those themes were needed only to add custom attributes. See the subsequent section on themes. Also, consider toggling Unmanaged attributes to OFF , or enabling this switch only for administrators, so that you can rely mainly on using managed attributes. See the User Profile Documentation for the details about Unmanaged attributes . 2.1.2. Default validations The default user profile configuration comes with a set of default validations for the basic predefined fields. Those validations were not present in previous versions, when the declarative-user-profile feature was disabled by default. If you have issues due to backward compatibility, you can change the default validators according to your needs. The default validators are as follows: The username , email , firstName and lastName attributes have a maximum length of 255 characters. These validations were indirectly present in previous versions as well because of the database constraint on the table USER_ENTITY for those fields with a maximum length of 255 characters. However, when using user storage providers, it might have been possible to use longer values before. The username attribute has a minimum length of three characters. The username attribute also has the username-prohibited-characters and up-username-not-idn-homograph validators by default, which were not present in previous versions. See the Validation section of the User Profile Documentation for the details about those attributes. Note that username is not editable by default unless you have the realm switch Edit username enabled . This change means that existing users with incorrect usernames should still work and will not be forced to update their usernames, but new users will be required to use correct usernames during their registration or creation through the admin REST API. The firstName and lastName attributes have the person-name-prohibited-characters validator on them, which was not present in previous versions. See the Validation section of the User Profile Documentation for the details about those attributes.
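As a sketch of how to inspect the resulting configuration on a running server (the realm name myrealm is an assumed example; this uses the standard kcadm.sh admin CLI, and the users/profile endpoint should be verified against your version's Admin REST API documentation):

$ bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin
$ bin/kcadm.sh get users/profile -r myrealm

The returned JSON lists each managed attribute together with its validations, including the length and prohibited-characters validators described above.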
Note that both first name and last name are editable by default, so users who already have an invalid first or last name from a previous version will be forced to update it when updating their user profiles. 2.1.3. User attribute names with strange characters In previous versions, you could create a user with attribute names such as some:attribute or some/attribute . The user profile intentionally does not allow you to create attributes with such strange names in the user profile configuration. So you may need to configure Unmanaged attributes for your realm and enable unmanaged attributes for administrators (ideally) or for end users (if really needed), although it is strongly preferred to avoid using such attribute names. 2.1.4. Verify Profile required action enabled by default The verify-profile required action is enabled by default for new realms. However, when you migrate from a previous version, your existing realms will have the same state of this verify-profile action as before, which usually means disabled, as it was disabled by default in previous versions. For the details about this required action, see the User Profile Documentation . 2.1.5. Changes to Freemarker templates to render pages based on the user profile and realm In this release, the following templates were updated to make it possible to dynamically render attributes based on the user profile configuration set to a realm: login-update-profile.ftl register.ftl update-email.ftl These templates are responsible for rendering the update profile (when the Update Profile required action is enabled for a user), the registration, and the update email (when the UPDATE_EMAIL feature is enabled) pages, respectively. If you use a custom theme to change these templates, they will function as expected because only the content is updated. However, we recommend that you take a look at how to configure a declarative user profile and possibly avoid changing the built-in templates by using all the capabilities provided by this feature. Also, the templates used by the declarative-user-profile feature to render the pages for the same flows are no longer necessary and were removed in this release: update-user-profile.ftl register-user-profile.ftl If you were using the declarative-user-profile feature in a previous release with customizations to the above templates, update the login-update-profile.ftl and register.ftl accordingly. 2.1.6. New Freemarker template for the update profile page at first login through a broker In this release, the server will render the update profile page when the user is authenticating through a broker for the first time using the idp-review-user-profile.ftl template. In previous releases, the template used to update the profile during the first broker login flow was login-update-profile.ftl , the same template used to update the profile when users are authenticating to a realm. By using separate templates for each flow, a clearer distinction exists as to which flow a template is used in, rather than sharing the same template and potentially introducing unexpected changes in behavior that should only affect pages for a specific flow. If you have customizations to the login-update-profile.ftl template to customize how users update their profiles when authenticating through a broker, make sure to move your changes to the new template. 2.2. Changes to themes 2.2.1. Changes to the Welcome theme The 'welcome' theme has been updated to use a new layout and now uses PatternFly 5, rather than PatternFly 3.
If you are extending the theme, or providing your own, you may need to update it as follows: 2.2.1.1. Migrate from PatternFly 3 to PatternFly 5 The welcome theme was one of the more outdated themes in Red Hat build of Keycloak. It was originally based on PatternFly 3, but has now been updated to use PatternFly 5, skipping a major version in the process. If your custom theme extends the built-in theme, you will need to update it to use PatternFly 5 syntax. Consult the PatternFly 5 documentation for details. If you are still using PatternFly 3 in your own custom theme (not extending the built-in one), you can continue to use it, but PatternFly 3 support will be removed in a future release, so you should consider migrating to PatternFly 5 as soon as possible. 2.2.1.2. Automatic redirect to the Admin Console If the Admin Console is enabled, the welcome page will automatically redirect to it if the administrative user already exists. This behavior can be modified by setting the redirectToAdmin property in your theme.properties file. By default, the property is set to false , unless you are extending the built-in theme, in which case the property is set to true . 2.2.1.3. The documentationUrl and displayCommunityLinks properties have been removed. These properties were previously used for navigational elements that are now no longer present. If you are extending the built-in theme, you will need to remove these properties from your theme.properties file, as they no longer have any effect. 2.2.1.4. Assets are now loaded from 'common' resources Images such as the background, logo and favicon are now loaded from the 'common' resources, rather than the theme resources. This change means that if you are extending the built-in theme, and are overwriting these images, you will need to move them to the 'common' resources of your theme, and update your theme.properties file to include the new paths: # This defaults to 'common/keycloak' if not set. import=common/your-theme-name 2.2.2. Changes to the Account Console theme customization If you were previously extending the now deprecated version 2 of the Account Console theme, you will need to update your theme to use the new version 3 of the Account Console theme. The new version of the Account Console theme comes with some changes in regard to how it is customized. To start with a clean slate, you can follow the new customization quickstart . To move your custom theme, start by changing the parent to the new theme: # Before parent=keycloak.v2 # After parent=keycloak.v3 If you have any custom React components, you will import React directly, rather than using a relative path: // Before import * as React from "../../../../common/keycloak/web_modules/react.js"; // After import React from "react"; If you are using content.json to customize the theme, there are some changes to the structure of the file, specifically: The content property has been renamed to children . The id , icon and componentName properties have been removed, as modulePath provides the same functionality. 2.2.3. Language files for themes default to UTF-8 encoding This release now follows the standard mechanism of Java 9 and later, which assumes resource bundle files are encoded in UTF-8. Previous versions of Keycloak supported specifying the encoding in the first line with a comment like # encoding: UTF-8 , which is no longer supported and is ignored. Message properties files for themes are now read in UTF-8 encoding, with an automatic fallback to ISO-8859-1 encoding.
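If any of your message properties files are stored in another encoding, a one-off conversion with iconv could look like this (a sketch; the file name and the ISO-8859-2 source encoding are assumed examples):

$ iconv -f ISO-8859-2 -t UTF-8 messages_cs.properties > messages_cs.properties.utf8
$ mv messages_cs.properties.utf8 messages_cs.properties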
If you are using a different encoding, convert the files to UTF-8, for example with iconv as sketched above. 2.3. Operator changes 2.3.1. Operator Referenced Resource Polling Secrets and ConfigMaps referenced via the Keycloak CR will now be polled for changes, rather than watched by the API server. This polling may take up to one minute for changes to be detected. This was done to avoid requiring label manipulation on those resources. After upgrading, if any Secret still has the operator.keycloak.org/component label, it may be removed or ignored. 2.3.2. Operator Customization Property Keys The property keys used by the operator for advanced configuration have changed from operator.keycloak to kc.operator.keycloak . 2.3.3. Operator -secrets-store Secret Older versions of the operator created a Secret to track watched Secrets. Newer versions of the operator no longer use the -secrets-store Secret, so it may be deleted. 2.4. Keycloak CR changes 2.4.1. Keycloak CR resources options When no resources options are specified in the Keycloak CR and KeycloakRealmImport CR, default values are used. The default requests memory for the Keycloak deployment and the realm import Job is set to 1700MiB , and the limits memory is set to 2GiB . 2.4.2. Default Keycloak CR Hostname When running on OpenShift, with ingress enabled, and with the spec.ingress.className set to openshift-default, you may leave the spec.hostname.hostname unpopulated in the Keycloak CR. The operator will assign a default hostname to the stored version of the CR similar to what would be created by an OpenShift Route without an explicit host - that is, ingress-namespace.appsDomain . If the appsDomain changes, or should you need a different hostname for any reason, then update the Keycloak CR. 2.5. Features Changes It is no longer allowed to have the same feature in both the --features and --features-disabled lists. The feature should appear in only one list. The usage of unversioned feature names, such as docker , in the --features list will enable the latest supported version of the feature for you. If you need more predictable behavior across releases, reference the particular version you want instead, such as docker:v1 . 2.6. Miscellaneous changes 2.6.1. Removing custom user attribute indexes When searching for users by user attribute, Red Hat build of Keycloak no longer searches for user attribute names forcing lower case comparisons. This change means that the native index for Red Hat build of Keycloak on the user attribute table will now be used when searching. If you have created your own index based on lower(name) to speed up searches, you can now remove it. 2.6.2. Changes to the user representation in both Admin API and Account contexts Both the org.keycloak.representations.idm.UserRepresentation and org.keycloak.representations.account.UserRepresentation representation classes have changed so that the root user attributes (such as username , email , firstName , lastName , and locale ) have a consistent representation when fetching or sending the representation payload to the Admin and Account APIs, respectively. The username , email , firstName , lastName , and locale attributes were moved to a new org.keycloak.representations.idm.AbstractUserRepresentation base class. Also, the getAttributes method is targeted for representing only custom attributes, so you should not expect any root attribute in the map returned by this method. This method is mainly targeted for clients when updating or fetching any custom attribute for a given user.
In order to resolve all the attributes including the root attributes, a new getRawAttributes method was added so that the resulting map also includes the root attributes. However, this method is not available from the representation payload, and it is targeted to be used by the server when managing user profiles. 2.6.3. https-client-auth is a build time option The https-client-auth option had been treated as a run time option; however, this is not supported by Quarkus. The option needs to be handled at build time instead. 2.6.4. Sequential loading of offline sessions and remote sessions Starting with this release, the first member of a Red Hat build of Keycloak cluster will load remote sessions sequentially instead of in parallel. If offline session preloading is enabled, those will be loaded sequentially as well. The previous code led to high resource consumption across the cluster at startup, was challenging to analyze in production environments, and could lead to complex failure scenarios if a node was restarted during loading. Therefore, it was changed to sequential session loading. For offline sessions, the default in this and previous versions of Red Hat build of Keycloak is to load those sessions on demand, which scales better with a lot of offline sessions than the attempt to preload them in parallel. Setups that use this default are not affected by the change of the loading strategy for offline sessions. Setups that have offline session preloading enabled should migrate to a setup where offline-session preloading is disabled. 2.6.5. Infinispan metrics use labels for cache manager and cache names When enabling metrics for Red Hat build of Keycloak's embedded caches, the metrics now use labels for the cache manager and the cache names. Old metric example without labels New metric example with labels To revert the change for an installation, use a custom Infinispan XML configuration and change the configuration as follows: 2.6.6. User attribute value length extension As of this release, Red Hat build of Keycloak supports storing and searching by user attribute values longer than 255 characters, which was previously a limitation. In setups where users are allowed to update attributes, for example, via the account console, prevent denial of service attacks by adding validations. Ensure that no unmanaged attributes are allowed and that all editable attributes have a validation that limits the input length. For unmanaged attributes, the maximum length is 2048 characters. For managed attributes, the default maximum length is 2048 characters. Administrators can change this by adding a validator of type length . Warning Red Hat build of Keycloak caches user-related objects in its internal caches. Using longer attributes increases the memory that is consumed by the cache. Therefore, limiting the length of attribute values is recommended. Consider storing large objects outside Red Hat build of Keycloak and referencing them by ID or URL. This change adds new indexes on the tables USER_ATTRIBUTE and FED_USER_ATTRIBUTE . If those tables contain more than 300000 entries, Red Hat build of Keycloak will skip the index creation by default during the automatic schema migration and instead log the SQL statement on the console during migration, to be applied manually after Red Hat build of Keycloak's startup. See the Upgrading Guide for details on how to configure a different limit. 2.6.6.1.
2.6.6.1. Additional migration steps for LDAP This is for installations that match all the following criteria: User attributes in the LDAP directory are larger than 2048 characters, or binary attributes are larger than 1500 bytes. The attributes are changed by admins or users via the admin console, the APIs, or the account console. To be able to enable changing those attributes via UI and REST APIs, perform the following steps: Declare the attributes identified above as managed attributes in the user profile of the realm. Define a length validator for each attribute added in the previous step, specifying the desired minimum and maximum length of the attribute value. For binary values, add 33 percent to the expected binary length to account for the overhead of Red Hat build of Keycloak's internal base64 encoding of binary values. 2.6.6.2. Additional migration steps for custom user storage providers This is for installations that match all the following criteria: Running MariaDB or MySQL as the database for Red Hat build of Keycloak. Entries in the table FED_USER_ATTRIBUTE with contents in the VALUE column that are larger than 2048 characters. This table is used for custom user providers which have federation enabled. The long attributes are changed by admins or users via the admin console or the account console. To be able to enable changing those attributes via UI and REST APIs, perform the following steps: Declare the attributes identified above as managed attributes in the user profile of the realm. Define a length validator for each attribute added in the previous step, specifying the desired minimum and maximum length of the attribute value. 2.6.7. The Admin send-verify-email API now uses the same email verification template In this release, the API will use the email-verification.ftl template instead of executeActions.ftl . Before upgrading After upgrading If you have customized the executeActions.ftl template to modify how users verify their email using this API, ensure that you transfer your modifications to the new template. A new parameter called lifespan will be introduced to allow overriding of the default lifespan value (12 hours). If you prefer the previous behavior, use the execute-actions-email API as follows. 2.6.8. Changes to Password Hashing In this release, we adapted the password hashing defaults to match the OWASP recommendations for Password Storage . As part of this change, the default password hashing provider has changed from pbkdf2-sha256 to pbkdf2-sha512 . Also, the number of default hash iterations for pbkdf2 based password hashing algorithms changed as follows: Provider ID Algorithm Old Iterations New Iterations pbkdf2 PBKDF2WithHmacSHA1 20,000 1,300,000 pbkdf2-sha256 PBKDF2WithHmacSHA256 27,500 600,000 pbkdf2-sha512 PBKDF2WithHmacSHA512 30,000 210,000 If a realm does not explicitly configure a password policy with hashAlgorithm and hashIterations , then the new configuration takes effect on the next password-based login, or when a user password is created or updated. 2.6.8.1. Performance of new password hashing configuration Tests on a machine with an Intel i9-8950HK CPU (12) @ 4.800GHz yielded the following average time differences for hashing 1000 passwords (averages from 3 runs). Note that the average duration for the PBKDF2WithHmacSHA1 algorithm was computed with a lower number of passwords due to the long runtime.
Provider ID Algorithm Old duration New duration Difference pbkdf2 PBKDF2WithHmacSHA1 122ms 3,114ms +2,992ms pbkdf2-sha256 PBKDF2WithHmacSHA256 20ms 451ms +431ms pbkdf2-sha512 PBKDF2WithHmacSHA512 33ms 224ms +191ms Users of the pbkdf2 provider might need to explicitly reduce the number of hash iterations to regain acceptable performance. This can be done by configuring the hash iterations explicitly in the password policy of the realm. 2.6.8.2. Expected increased overall CPU usage and temporary increased database activity The CPU usage per password-based login in our tests increased by a factor of five, which includes both the changed password hashing and the unchanged TLS connection handling. The overall CPU increase should be around a factor of two to three, due to the averaging effect of Red Hat build of Keycloak's other activities like refreshing access tokens and client credential grants. Still, this depends on the unique workload of an installation. After the upgrade, during a password-based login, the user's password will be re-hashed with the new hash algorithm and hash iterations as a one-off activity and updated in the database. As this clears the user from Red Hat build of Keycloak's internal cache, you will also see increased read activity on the database level. This increased database activity will decrease over time as more and more users' passwords are re-hashed. 2.6.8.3. How to keep using the old pbkdf2-sha256 password hashing? To keep the old password hashing for a realm, specify hashAlgorithm and hashIterations explicitly in the realm password policy. Hashing Algorithm: pbkdf2-sha256 Hashing Iterations: 27500 2.6.9. Renaming JPA provider configuration options for migration After removal of the Map Store, the following configuration options were renamed: spi-connections-jpa-legacy-initialize-empty to spi-connections-jpa-quarkus-initialize-empty spi-connections-jpa-legacy-migration-export to spi-connections-jpa-quarkus-migration-export spi-connections-jpa-legacy-migration-strategy to spi-connections-jpa-quarkus-migration-strategy 2.6.10. Temporary lockout log replaced with event There is now a new event USER_DISABLED_BY_TEMPORARY_LOCKOUT when a user is temporarily locked out by the brute force protector. The log with ID KC-SERVICES0053 has been removed, as the new event offers the information in a structured form. As it is a success event, the new event is logged by default at the DEBUG level. Use the setting spi-events-listener-jboss-logging-success-level as described in the Event listener chapter in the Server Administration Guide to change the log level of all success events. To trigger custom actions or custom log entries, write a custom event listener as described in the Event Listener SPI in the Server Developer Guide . 2.6.11. Keycloak JS imports might need to be updated If you are loading Keycloak JS directly from the Red Hat build of Keycloak server, this section can be safely ignored. If you are loading Keycloak JS from the NPM package and are using a bundler like Webpack, Vite, and so on, you might need to make some changes to your code. The Keycloak JS package now uses the exports field in the package.json file. This means that you might have to change your imports: // Before import Keycloak from 'keycloak-js/dist/keycloak.js'; import AuthZ from 'keycloak-js/dist/keycloak-authz.js'; // After import Keycloak from 'keycloak-js'; import AuthZ from 'keycloak-js/authz';
2.6.12. Internal algorithm changed from HS256 to HS512 The algorithm that Red Hat build of Keycloak uses to sign internal tokens (a JWT which is consumed by Red Hat build of Keycloak itself, for example a refresh or action token) has changed from HS256 to the more secure HS512 . A new key provider named hmac-generated-hs512 is now added for realms. Note that in migrated realms the old hmac-generated provider and the old HS256 key are maintained and still validate tokens issued before the upgrade. The HS256 provider can be manually deleted when no more old tokens exist, following the rotating keys guidelines . 2.6.13. Different JVM memory settings when running in a container The JVM options -Xms and -Xmx were replaced by -XX:InitialRAMPercentage and -XX:MaxRAMPercentage when running in a container. Instead of a static maximum heap size setting, Red Hat build of Keycloak specifies the maximum as 70% of the total container memory. As the heap size is dynamically calculated based on the total container memory, you should always set the memory limit for the container. Warning If the memory limit is not set, memory consumption rapidly increases, as the maximum heap size grows up to 70% of the total container memory. For more details, see Specifying different memory settings . 2.6.14. Added iss parameter to OAuth 2.0/OpenID Connect Authentication Response The RFC 9207 OAuth 2.0 Authorization Server Issuer Identification specification adds the parameter iss to the OAuth 2.0/OpenID Connect Authentication Response for realizing secure authorization responses. In past releases, we did not have this parameter, but now Red Hat build of Keycloak adds this parameter by default, as required by the specification. However, some OpenID Connect / OAuth2 adapters, and especially older Red Hat build of Keycloak adapters, may have issues with this new parameter. For example, the parameter will always be present in the browser URL after successful authentication to the client application. In these cases, it may be useful to disable adding the iss parameter to the authentication response. This can be done for the particular client in the Red Hat build of Keycloak Admin console, in the client details in the section with OpenID Connect Compatibility Modes , described in Section 4.1, "Compatibility with older adapters" . A dedicated Exclude Issuer From Authentication Response switch exists, which can be turned on to prevent adding the iss parameter to the authentication response. 2.6.15. Wildcard characters handling JPA allows the wildcards % and _ when searching, while other providers like LDAP allow only * . As * is a natural wildcard character in LDAP, it works in all places, while with JPA it only worked at the beginning and the end of the search string. Starting with this release, the only wildcard character is * , which works consistently across all providers in all places in the search string. All special characters of a specific provider, like % and _ for JPA, are escaped. For exact search, with added quotes e.g. "w*ord" , the behavior remains the same as in previous releases. 2.6.16. Changes to the value format of claims mapped by the realm and client role mappers Before this release, both the realm ( User Realm Role ) and client ( User Client Role ) protocol mappers were mapping a stringified JSON array when the Multivalued setting was disabled. However, the Multivalued setting indicates whether the claim should be mapped as a list or, if disabled, only a single value from the same list of values.
In this release, the role and client mappers now map to a single value from the effective roles of a user when they are marked as single-valued ( Multivalued disabled). 2.6.17. Changes to password fields in Login UI This version introduces a toggle to show or hide password inputs. Affected pages: login.ftl login-password.ftl login-update-password.ftl register.ftl register-user-profile.ftl In general, all <input type="password" name="password" /> elements are now encapsulated within a div . The input element is followed by a button which toggles the visibility of the password input. Old code example: <input type="password" id="password" name="password" autocomplete="current-password" style="display:none;"/> New code example: <div class="${properties.kcInputGroup!}"> <input type="password" id="password" name="password" autocomplete="current-password" style="display:none;"/> <button class="pf-c-button pf-m-control" type="button" aria-label="${msg('showPassword')}" aria-controls="password" data-password-toggle data-label-show="${msg('showPassword')}" data-label-hide="${msg('hidePassword')}"> <i class="fa fa-eye" aria-hidden="true"></i> </button> </div> 2.6.18. kc.sh and shell metacharacters The kc.sh script no longer applies an additional shell eval to parameters and the environment variables JAVA_OPTS_APPEND and JAVA_ADD_OPENS; thus, continued use of double escaping/quoting will result in the parameter being misunderstood. For example, instead of Use a single escape: This change also means you cannot invoke kc.sh using a single quoted value of all arguments. For example, you can no longer use it must instead be individual arguments Similarly, instead of it must instead be individual arguments The usage of individual arguments is also required in Dockerfile run commands. 2.6.19. GroupProvider changes A new method has been added to allow searching and paging through top-level groups. If you implement this interface, you will need to implement the following method: Stream<GroupModel> getTopLevelGroupsStream(RealmModel realm, String search, Boolean exact, Integer firstResult, Integer maxResults) 2.6.20. GroupRepresentation changes A new field subGroupCount has been added to inform clients how many subgroups are on any given group. The subGroups list is now only populated on queries that request hierarchy data. This field is populated from the "bottom up", so it cannot be relied on for getting all subgroups of a group. Use a GroupProvider or request the subgroups from GET {keycloak server}/realms/{realm}/groups/{group_id}/children 2.6.21. New endpoint for Group Admin API The endpoint GET {keycloak server}/realms/{realm}/groups/{group_id}/children has been added as a way to get subgroups of specific groups, with support for pagination; see the example request below. 2.6.22. RESTEasy Reactive Relying on RESTEasy Classic is no longer an option because it is not available anymore. Migration will be needed for SPIs and code that rely on RESTEasy Classic and related packages under org.jboss.resteasy.spi.* . 2.6.23. Partial export requires manage-realm permission The endpoint POST {keycloak server}/realms/{realm}/partial-export and the corresponding action in the admin console now require the manage-realm permission for execution instead of view-realm . This endpoint exports the realm configuration into a JSON file, and the new permission is more appropriate. The parameters exportGroupsAndRoles and exportClients , which include the realm groups/roles and clients in the export respectively, continue to require the same permissions ( query-groups and view-clients ).
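As an illustration of the children endpoint introduced in section 2.6.21, a paginated request might look like the following sketch; the first and max pagination parameters follow the usual Admin REST API conventions, and the values here are placeholders:

GET {keycloak server}/realms/{realm}/groups/{group_id}/children?first=0&max=20&briefRepresentation=true

The briefRepresentation parameter shown here is the same one discussed in section 2.6.25 below.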
2.6.24. Valid redirect URIs for clients are always compared with exact string matching Version 1.8.0 introduced lower-casing of the hostname and scheme when comparing a redirect URI with the specified valid redirects for a client. Unfortunately, it did not fully work for all the protocols; for example, the host was lower-cased for http but not for https . As the OAuth 2.0 Security Best Current Practice advises comparing URIs using exact string matching, Red Hat build of Keycloak follows the recommendation, and from now on valid redirects are compared exactly, including the case of the hostname and scheme. For realms relying on the old behavior, the valid redirect URIs for their clients should now hold separate entries for each URI that should be recognized by the server. Although it introduces more steps and verbosity when configuring clients, the new behavior enables more secure deployments, as pattern-based checks are frequently the cause of security issues, not only due to how they are implemented but also due to how they are configured. 2.6.25. Fix handling of Groups.getSubGroups briefRepresentation parameter Version 23.0.0 introduced a new endpoint getSubGroups ("children") on the Groups resource, where the briefRepresentation parameter caused the retrieval of full representations of the subgroups. The meaning has now been changed so that the parameter returns the brief representation. 2.6.26. Changes in jboss-logging event messages The jboss-logging event message values are now quoted (with the " character by default) and sanitized to prevent any line breaks. There are two new options in the provider ( spi-events-listener-jboss-logging-sanitize and spi-events-listener-jboss-logging-quotes ) that allow you to customize the new behavior. For example, to avoid both sanitization and quoting, the server can be started in this manner: For more information about the options, see All provider configuration . 2.7. Deprecated and removed features 2.7.1. Truststore deprecations The spi-truststore-file-* options and the truststore-related options https-trust-store-* are deprecated. Instead, use the new default location for truststore material, conf/truststores , or specify your desired paths by using the truststore-paths option. For details, see Configuring trusted certificates for outgoing requests . The tls-hostname-verifier property should be used instead of the spi-truststore-file-hostname-verification-policy property. A collateral effect of the changes is that the truststore provider is now always configured with some certificates (at least the default Java trusted certificates are present). This new behavior can affect other parts of Red Hat build of Keycloak. For example, webauthn registration can fail if attestation conveyance was configured to Direct validation. Previously, if the truststore provider was not configured, the incoming certificate was not validated; now this validation is always performed. The registration fails with an invalid cert path error, as the certificate chain sent by the dongle is not trusted by Red Hat build of Keycloak. The Certificate Authorities of the authenticator need to be present in the truststore provider to correctly perform the attestation. 2.7.2. Deprecated --proxy option The --proxy option has been deprecated and will be removed in a future release. The following table explains how the deprecated option maps to supported options.
Deprecated usage New usage kc.sh (no proxy option set) kc.sh kc.sh --proxy none kc.sh kc.sh --proxy edge kc.sh --proxy-headers forwarded|xforwarded --http-enabled true kc.sh --proxy passthrough kc.sh --hostname-port 80|443 (depending on whether HTTPS is used) kc.sh --proxy reencrypt kc.sh --proxy-headers forwarded|xforwarded Note For hardened security, the --proxy-headers option does not allow selecting both forwarded and xforwarded values at the same time (as was the case before for --proxy edge and --proxy reencrypt ). Warning When using the proxy headers option, make sure your reverse proxy properly sets and overwrites the Forwarded or X-Forwarded-* headers respectively. To set these headers, consult the documentation for your reverse proxy. Misconfiguration will leave Red Hat build of Keycloak exposed to security vulnerabilities. You can also set the proxy headers when using the Operator: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... proxy: headers: forwarded|xforwarded Note If the proxy.headers field is not specified, the Operator falls back to the previous behavior by implicitly setting proxy=passthrough by default. This results in deprecation warnings in the server log. This fallback will be removed in a future release. 2.7.3. Deprecated offline session preloading The default behavior of Red Hat build of Keycloak is to load offline sessions on demand. The old behavior of preloading them at startup is now deprecated, as preloading them at startup does not scale well with a growing number of sessions and increases Red Hat build of Keycloak's memory usage. The old behavior will be removed in a future release. To re-enable the old behavior while it is deprecated but not yet removed, use the feature flag and the SPI option as shown below: bin/kc.[sh|bat] start --features-enabled offline-session-preloading --spi-user-sessions-infinispan-preload-offline-sessions-from-database=true The API of UserSessionProvider deprecated the method getOfflineUserSessionByBrokerSessionId(RealmModel realm, String brokerSessionId) . Instead of this method, use getOfflineUserSessionByBrokerUserIdStream(RealmModel, String brokerUserId) to first get the sessions of a user, and then filter by the broker session ID as needed. 2.7.4. Deprecated methods from data providers and models RealmModel#getTopLevelGroupsStream() and its overloaded methods are now deprecated. 2.7.5. Deprecations and removals for cookies As part of refactoring cookie handling in Red Hat build of Keycloak, there are some changes to how cookies are set: All cookies will now have the secure attribute set if the request is through a secure context. WELCOME_STATE_CHECKER cookies now set SameSite=Strict . For custom extensions, there may be some changes needed: LocaleSelectorProvider.KEYCLOAK_LOCALE is deprecated, as cookies are now managed through the CookieProvider HttpResponse.setWriteCookiesOnTransactionComplete has been removed HttpCookie is deprecated, please use NewCookie.Builder instead ServerCookie is deprecated, please use NewCookie.Builder instead 2.7.6. Removal of the deprecated mode for SAML encryption The compatibility mode for SAML encryption introduced in version 21 is now removed. The system property keycloak.saml.deprecated.encryption is no longer managed by the server. The clients which still used the old signing key for encryption should update it from the new IDP configuration metadata.
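For custom extensions affected by the cookie changes in section 2.7.5, a minimal sketch of building a cookie with the JAX-RS NewCookie.Builder follows; the cookie name, value, and attribute choices here are purely illustrative:

import jakarta.ws.rs.core.NewCookie;

NewCookie cookie = new NewCookie.Builder("EXAMPLE_COOKIE")  // hypothetical cookie name
        .value("example-value")
        .path("/")
        .secure(true)    // matches the new behavior of setting secure in secure contexts
        .httpOnly(true)
        .sameSite(NewCookie.SameSite.STRICT)
        .build();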
2.7.7. Renaming model modules After removal of the Map Store, the following modules were renamed: org.keycloak:keycloak-model-legacy-private to org.keycloak:keycloak-model-storage-private org.keycloak:keycloak-model-legacy-services to org.keycloak:keycloak-model-storage-services and the org.keycloak:keycloak-model-legacy module was deprecated and will be removed in the next release in favour of the org.keycloak:keycloak-model-storage module. 2.7.8. Removed RegistrationProfile form action The form action RegistrationProfile (displayed in the UI of authentication flows as Profile Validation ) was removed from the codebase and also from all authentication flows. By default, it was in the built-in registration flow of every realm. The validation of user attributes, as well as creation of the user including all that user's attributes, is handled by the RegistrationUserCreation form action, and hence RegistrationProfile is not needed anymore. There is usually no further action needed in relation to this change, unless you used the RegistrationProfile class in your own providers. 2.7.9. Partial update to user attributes when updating users through the Admin User API is no longer supported When updating user attributes through the Admin User API, you can no longer execute partial updates of the user attributes, including the root attributes such as username , email , firstName , and lastName . 2.7.10. The deprecated auto-build CLI option was removed The auto-build CLI option had been marked as deprecated for a long time. In this release, it was completely removed and is no longer supported. When executing the start command, the server is automatically built based on the configuration. To prevent this behavior, set the --optimized flag. 2.7.11. Removal of the options to trim the event's details length Since this release, Keycloak supports a long value for the EventEntity details column. Therefore, it no longer supports the options for trimming event detail length, --spi-events-store-jpa-max-detail-length and --spi-events-store-jpa-max-field-length . 2.7.12. Removed namespaces from our translations We moved all translations into one file for the admin-ui. If you have made your own translations or extended the admin UI, you will need to migrate them to this new format. Also, if you have "overrides" in your database, you will have to remove the namespace from the keys. Some keys are the same, only in different namespaces; this is most often the case for help texts. In these cases we have postfixed the key with Help . If you want, you can use this node script to help with the migration.
It will take all the single files and put them into a new one and also take care of some of the mapping: import { readFileSync, writeFileSync, appendFileSync } from "node:fs"; const ns = [ "common", "common-help", "dashboard", "clients", "clients-help", "client-scopes", "client-scopes-help", "groups", "realm", "roles", "users", "users-help", "sessions", "events", "realm-settings", "realm-settings-help", "authentication", "authentication-help", "user-federation", "user-federation-help", "identity-providers", "identity-providers-help", "dynamic", ]; const map = new Map(); const dup = []; ns.forEach((n) => { const rawData = readFileSync(n + ".json"); const translation = JSON.parse(rawData); Object.entries(translation).map((e) => { const name = e[0]; const value = e[1]; if (map.has(name) && map.get(name) !== value) { if (n.includes("help")) { map.set(name + "Help", value); } else { map.set(name, value); dup.push({ name: name, value: map.get(name), dup: { ns: n, value: value }, }); } } else { map.set(name, value); } }); }); writeFileSync( "translation.json", JSON.stringify(Object.fromEntries(map.entries()), undefined, 2), ); const mapping = [ ["common:clientScope", "clientScopeType"], ["identity-providers:createSuccess", "createIdentityProviderSuccess"], ["identity-providers:createError", "createIdentityProviderError"], ["clients:createError", "createClientError"], ["clients:createSuccess", "createClientSuccess"], ["user-federation:createSuccess", "createUserProviderSuccess"], ["user-federation:createError", "createUserProviderError"], ["authentication-help:name", "flowNameHelp"], ["authentication-help:description", "flowDescriptionHelp"], ["clientScopes:noRoles", "noRoles-clientScope"], ["clientScopes:noRolesInstructions", "noRolesInstructions-clientScope"], ["users:noRoles", "noRoles-user"], ["users:noRolesInstructions", "noRolesInstructions-user"], ["clients:noRoles", "noRoles-client"], ["clients:noRolesInstructions", "noRolesInstructions-client"], ["groups:noRoles", "noRoles-group"], ["groups:noRolesInstructions", "noRolesInstructions-group"], ["roles:noRoles", "noRoles-roles"], ["roles:noRolesInstructions", "noRolesInstructions-roles"], ["realm:realmName:", "realmNameField"], ["client-scopes:searchFor", "searchForClientScope"], ["roles:searchFor", "searchForRoles"], ["authentication:title", "titleAuthentication"], ["events:title", "titleEvents"], ["roles:title", "titleRoles"], ["users:title", "titleUsers"], ["sessions:title", "titleSessions"], ["client-scopes:deleteConfirm", "deleteConfirmClientScopes"], ["users:deleteConfirm", "deleteConfirmUsers"], ["groups:deleteConfirm_one", "deleteConfirmGroup_one"], ["groups:deleteConfirm_other", "deleteConfirmGroup_other"], ["identity-providers:deleteConfirm", "deleteConfirmIdentityProvider"], ["realm-settings:deleteConfirm", "deleteConfirmRealmSetting"], ["roles:whoWillAppearLinkText", "whoWillAppearLinkTextRoles"], ["users:whoWillAppearLinkText", "whoWillAppearLinkTextUsers"], ["roles:whoWillAppearPopoverText", "whoWillAppearPopoverTextRoles"], ["users:whoWillAppearPopoverText", "whoWillAppearPopoverTextUsers"], ["client-scopes:deletedSuccess", "deletedSuccessClientScope"], ["identity-providers:deletedSuccess", "deletedSuccessIdentityProvider"], ["realm-settings:deleteSuccess", "deletedSuccessRealmSetting"], ["client-scopes:deleteError", "deletedErrorClientScope"], ["identity-providers:deleteError", "deletedErrorIdentityProvider"], ["realm-settings:deleteError", "deletedErrorRealmSetting"], ["realm-settings:saveSuccess", "realmSaveSuccess"], 
["user-federation:saveSuccess", "userProviderSaveSuccess"], ["realm-settings:saveError", "realmSaveError"], ["user-federation:saveError", "userProviderSaveError"], ["realm-settings:validateName", "validateAttributeName"], ["identity-providers:disableConfirm", "disableConfirmIdentityProvider"], ["realm-settings:disableConfirm", "disableConfirmRealm"], ["client-scopes:updateSuccess", "updateSuccessClientScope"], ["client-scopes:updateError", "updateErrorClientScope"], ["identity-providers:updateSuccess", "updateSuccessIdentityProvider"], ["identity-providers:updateError", "updateErrorIdentityProvider"], ["user-federation:orderChangeSuccess", "orderChangeSuccessUserFed"], ["user-federation:orderChangeError", "orderChangeErrorUserFed"], ["authentication-help:alias", "authenticationAliasHelp"], ["authentication-help:flowType", "authenticationFlowTypeHelp"], ["authentication:createFlow", "authenticationCreateFlowHelp"], ["client-scopes-help:rolesScope", "clientScopesRolesScope"], ["client-scopes-help:name", "scopeNameHelp"], ["client-scopes-help:description", "scopeDescriptionHelp"], ["client-scopes-help:type", "scopeTypeHelp"], ["clients-help:description", "clientDescriptionHelp"], ["clients-help:clientType", "clientsClientTypeHelp"], ["clients-help:scopes", "clientsClientScopesHelp"], ["common:clientScope", "clientScopeTypes"], ["dashboard:realmName", "realmNameTitle"], ["common:description", "description"], ]; mapping.forEach((m) => { const key = m[0].split(":"); try { const data = readFileSync(key[0] + ".json"); const translation = JSON.parse(data); const value = translation[key[1]]; if (value) { appendFileSync( "translation.json", '"' + m[1] + '": ' + JSON.stringify(value) + ',\n', ); } } catch (error) { console.error("skipping namespace key: " + key); } }); Save this into a file called transform.mjs in your public/locale/<language> folder and run it with: Note This might not do a complete transform, but very close to it.
[ "This defaults to 'common/keycloak' if not set. import=common/your-theme-name", "Before parent=keycloak.v2 After parent=keycloak.v3", "// Before import * as React from \"../../../../common/keycloak/web_modules/react.js\"; // After import React from \"react\";", "vendor_cache_manager_keycloak_cache_sessions_statistics_approximate_entries_in_memory{cache=\"sessions\",node=\"...\"}", "vendor_statistics_approximate_entries_in_memory{cache=\"sessions\",cache_manager=\"keycloak\",node=\"...\"}", "<metrics names-as-tags=\"false\" />", "PUT /admin/realms/{realm}/users/{id}/send-verify-email", "Perform the following action(s): Verify Email", "Confirm validity of e-mail address [email protected].", "PUT /admin/realms/{realm}/users/{id}/execute-actions-email [\"VERIFY_EMAIL\"]", "// Before import Keycloak from 'keycloak-js/dist/keycloak.js'; import AuthZ from 'keycloak-js/dist/keycloak-authz.js'; // After import Keycloak from 'keycloak-js'; import AuthZ from 'keycloak-js/authz';", "<input type=\"password\" id=\"password\" name=\"password\" autocomplete=\"current-password\" style=\"display:none;\"/>", "<div class=\"USD{properties.kcInputGroup!}\"> <input type=\"password\" id=\"password\" name=\"password\" autocomplete=\"current-password\" style=\"display:none;\"/> <button class=\"pf-c-button pf-m-control\" type=\"button\" aria-label=\"USD{msg('showPassword')}\" aria-controls=\"password\" data-password-toggle data-label-show=\"USD{msg('showPassword')}\" data-label-hide=\"USD{msg('hidePassword')}\"> <i class=\"fa fa-eye\" aria-hidden=\"true\"></i> </button> </div>", "bin/kc.sh start --db postgres --db-username keycloak --db-url \"\\\"jdbc:postgresql://localhost:5432/keycloak?ssl=false&connectTimeout=30\\\"\" --db-password keycloak --hostname localhost", "bin/kc.sh start --db postgres --db-username keycloak --db-url \"jdbc:postgresql://localhost:5432/keycloak?ssl=false&connectTimeout=30\" --db-password keycloak --hostname localhost", "bin/kc.sh \"start --help\"", "bin/kc.sh start --help", "bin/kc.sh build \"--db postgres\"", "bin/kc.sh build --db postgres", "Stream<GroupModel> getTopLevelGroupsStream(RealmModel realm, String search, Boolean exact, Integer firstResult, Integer maxResults)", "./kc.sh start --spi-events-listener-jboss-logging-sanitize=false --spi-events-listener-jboss-logging-quotes=none", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: proxy: headers: forwarded|xforwarded", "bin/kc.[sh|bat] start --features-enabled offline-session-preloading --spi-user-sessions-infinispan-preload-offline-sessions-from-database=true", "import { readFileSync, writeFileSync, appendFileSync } from \"node:fs\"; const ns = [ \"common\", \"common-help\", \"dashboard\", \"clients\", \"clients-help\", \"client-scopes\", \"client-scopes-help\", \"groups\", \"realm\", \"roles\", \"users\", \"users-help\", \"sessions\", \"events\", \"realm-settings\", \"realm-settings-help\", \"authentication\", \"authentication-help\", \"user-federation\", \"user-federation-help\", \"identity-providers\", \"identity-providers-help\", \"dynamic\", ]; const map = new Map(); const dup = []; ns.forEach((n) => { const rawData = readFileSync(n + \".json\"); const translation = JSON.parse(rawData); Object.entries(translation).map((e) => { const name = e[0]; const value = e[1]; if (map.has(name) && map.get(name) !== value) { if (n.includes(\"help\")) { map.set(name + \"Help\", value); } else { map.set(name, value); dup.push({ name: name, value: map.get(name), dup: { ns: n, value: value }, }); } } else 
{ map.set(name, value); } }); }); writeFileSync( \"translation.json\", JSON.stringify(Object.fromEntries(map.entries()), undefined, 2), ); const mapping = [ [\"common:clientScope\", \"clientScopeType\"], [\"identity-providers:createSuccess\", \"createIdentityProviderSuccess\"], [\"identity-providers:createError\", \"createIdentityProviderError\"], [\"clients:createError\", \"createClientError\"], [\"clients:createSuccess\", \"createClientSuccess\"], [\"user-federation:createSuccess\", \"createUserProviderSuccess\"], [\"user-federation:createError\", \"createUserProviderError\"], [\"authentication-help:name\", \"flowNameHelp\"], [\"authentication-help:description\", \"flowDescriptionHelp\"], [\"clientScopes:noRoles\", \"noRoles-clientScope\"], [\"clientScopes:noRolesInstructions\", \"noRolesInstructions-clientScope\"], [\"users:noRoles\", \"noRoles-user\"], [\"users:noRolesInstructions\", \"noRolesInstructions-user\"], [\"clients:noRoles\", \"noRoles-client\"], [\"clients:noRolesInstructions\", \"noRolesInstructions-client\"], [\"groups:noRoles\", \"noRoles-group\"], [\"groups:noRolesInstructions\", \"noRolesInstructions-group\"], [\"roles:noRoles\", \"noRoles-roles\"], [\"roles:noRolesInstructions\", \"noRolesInstructions-roles\"], [\"realm:realmName:\", \"realmNameField\"], [\"client-scopes:searchFor\", \"searchForClientScope\"], [\"roles:searchFor\", \"searchForRoles\"], [\"authentication:title\", \"titleAuthentication\"], [\"events:title\", \"titleEvents\"], [\"roles:title\", \"titleRoles\"], [\"users:title\", \"titleUsers\"], [\"sessions:title\", \"titleSessions\"], [\"client-scopes:deleteConfirm\", \"deleteConfirmClientScopes\"], [\"users:deleteConfirm\", \"deleteConfirmUsers\"], [\"groups:deleteConfirm_one\", \"deleteConfirmGroup_one\"], [\"groups:deleteConfirm_other\", \"deleteConfirmGroup_other\"], [\"identity-providers:deleteConfirm\", \"deleteConfirmIdentityProvider\"], [\"realm-settings:deleteConfirm\", \"deleteConfirmRealmSetting\"], [\"roles:whoWillAppearLinkText\", \"whoWillAppearLinkTextRoles\"], [\"users:whoWillAppearLinkText\", \"whoWillAppearLinkTextUsers\"], [\"roles:whoWillAppearPopoverText\", \"whoWillAppearPopoverTextRoles\"], [\"users:whoWillAppearPopoverText\", \"whoWillAppearPopoverTextUsers\"], [\"client-scopes:deletedSuccess\", \"deletedSuccessClientScope\"], [\"identity-providers:deletedSuccess\", \"deletedSuccessIdentityProvider\"], [\"realm-settings:deleteSuccess\", \"deletedSuccessRealmSetting\"], [\"client-scopes:deleteError\", \"deletedErrorClientScope\"], [\"identity-providers:deleteError\", \"deletedErrorIdentityProvider\"], [\"realm-settings:deleteError\", \"deletedErrorRealmSetting\"], [\"realm-settings:saveSuccess\", \"realmSaveSuccess\"], [\"user-federation:saveSuccess\", \"userProviderSaveSuccess\"], [\"realm-settings:saveError\", \"realmSaveError\"], [\"user-federation:saveError\", \"userProviderSaveError\"], [\"realm-settings:validateName\", \"validateAttributeName\"], [\"identity-providers:disableConfirm\", \"disableConfirmIdentityProvider\"], [\"realm-settings:disableConfirm\", \"disableConfirmRealm\"], [\"client-scopes:updateSuccess\", \"updateSuccessClientScope\"], [\"client-scopes:updateError\", \"updateErrorClientScope\"], [\"identity-providers:updateSuccess\", \"updateSuccessIdentityProvider\"], [\"identity-providers:updateError\", \"updateErrorIdentityProvider\"], [\"user-federation:orderChangeSuccess\", \"orderChangeSuccessUserFed\"], [\"user-federation:orderChangeError\", \"orderChangeErrorUserFed\"], [\"authentication-help:alias\", 
\"authenticationAliasHelp\"], [\"authentication-help:flowType\", \"authenticationFlowTypeHelp\"], [\"authentication:createFlow\", \"authenticationCreateFlowHelp\"], [\"client-scopes-help:rolesScope\", \"clientScopesRolesScope\"], [\"client-scopes-help:name\", \"scopeNameHelp\"], [\"client-scopes-help:description\", \"scopeDescriptionHelp\"], [\"client-scopes-help:type\", \"scopeTypeHelp\"], [\"clients-help:description\", \"clientDescriptionHelp\"], [\"clients-help:clientType\", \"clientsClientTypeHelp\"], [\"clients-help:scopes\", \"clientsClientScopesHelp\"], [\"common:clientScope\", \"clientScopeTypes\"], [\"dashboard:realmName\", \"realmNameTitle\"], [\"common:description\", \"description\"], ]; mapping.forEach((m) => { const key = m[0].split(\":\"); try { const data = readFileSync(key[0] + \".json\"); const translation = JSON.parse(data); const value = translation[key[1]]; if (value) { appendFileSync( \"translation.json\", '\"' + m[1] + '\": ' + JSON.stringify(value) + ',\\n', ); } } catch (error) { console.error(\"skipping namespace key: \" + key); } });", "node ./transform.mjs" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/upgrading_guide/migration-changes
Chapter 14. Working with containers
Chapter 14. Working with containers 14.1. Introduction to containers A container packages all the necessary components, such as libraries, frameworks, and other additional dependencies, so that an application is isolated and self-sufficient within its own executable unit. A Red Hat container certification ensures supportability of both the operating system and the application layers. It provides enhanced security through vulnerability scanning and health grading of the Red Hat components, and a lifecycle commitment whenever the Red Hat or partner components are updated. However, containers running in privileged mode, or privileged containers, stretch their boundaries and interact with their host to run commands or access the host's resources. For example, a container that reads or writes to a filesystem mounted on the host must run in privileged mode. Privileged containers might create a security risk. A compromised privileged container might also compromise its host and the integrity of the environment as a whole. Moreover, privileged containers are susceptible to incompatibilities with the host, as operating system interfaces such as commands, libraries, ABIs, and APIs might change or be deprecated over time. This can put privileged containers at risk of interacting with the host in an unsupported way. You must ensure that your containers can run on any supported host in the customer's environment. Red Hat encourages you to adopt a continuous integration model that lets you test your containers with public betas or earlier versions of Red Hat products to maximize compatibility. Two types of container certification are available: Partner Validation - Select this type of certification if you want to validate your product using your own criteria and test suite on Red Hat platforms. Partner validation allows you to publish your software offerings on the Red Hat Ecosystem Catalog more quickly. However, validated workloads may not incorporate all of Red Hat's integration requirements and best practices. We encourage you to continue your efforts toward Red Hat certification. Certified - Select this type of certification if you want your product to undergo thorough testing by using Red Hat's test suite and to benefit from collaborative support. Your products will meet your standards and Red Hat's criteria, including interoperability, lifecycle management, security, and support requirements. Products that meet the requirements and complete the certification workflow get listed on the Red Hat Ecosystem Catalog. Partners will receive a logo to promote their product certification. 14.2. Container certification workflow Note Red Hat recommends that you are a Red Hat Certified Engineer or hold equivalent experience before starting the certification process. Task Summary The certification workflow includes three primary stages: Section 14.2.1, "Certification on-boarding" Section 14.2.2, "Certification testing for containerized applications" Section 14.2.3, "Publishing the certified product listing on the Red Hat Ecosystem Catalog" 14.2.1. Certification on-boarding Perform the steps outlined for certification onboarding: Join the Red Hat Connect for Technology Partner Program. Agree to the program terms and conditions. Create your product listing by selecting your desired product category. You can select from the available product categories: Containerized Application Standalone Application OpenStack Infrastructure Complete your company profile. Add components to the product listing. Certify components for your product listing.
Additional resources For detailed instructions about creating your first product listing, see Creating a product . 14.2.2. Certification testing for containerized applications Follow these high-level steps to run a certification test: Build your container image. Upload your container image to your chosen registry. You can use any registry of your choice. Note You can perform Red Hat Container certification by using a custom container registry. This enables you to provide an access token to the registry, which helps to verify the availability of the container images for users. Also, it ensures that the container image can undergo scanning by the security scanner and can be published on the Red Hat Ecosystem Catalog. Custom registries employ diverse authentication methods, and the Red Hat Software certification program supports the following authentication methods along with the standard OCI registry API: Bearer Authentication OAuth2 Basic Authentication For more details about the authentication methods, see Supported auth methods . Download the Preflight certification utility . Run Preflight with your container image. Submit results on Red Hat Partner Connect . Additional resources For detailed instructions about certification testing, see Running the certification test suite . 14.2.3. Publishing the certified product listing on the Red Hat Ecosystem Catalog The Partner Validated or Certified container must be added to your product's Product Listing page on the Red Hat Partner Connect portal. Once published, your product listing is displayed on the Red Hat Ecosystem Catalog , using the product information that you provide. You can publish both Partner Validated and Certified applications on the Red Hat Ecosystem Catalog with the respective labels. Additional resources For more information about containers, see: Containers and UBI Technical Track Choosing the right container image Everything you need to know about Red Hat Universal Base Image 14.3. Testing multi-arch container certification using preflight Follow these steps to perform a multi-arch container certification test: Procedure Build your multi-arch container images. See Building and pushing multi-arch container images using Podman for more information. Upload your container images to your chosen registry. You can select any OCI registry of your choice. Note You can perform Red Hat Container certification by using a custom container registry. This enables you to provide an access token to the registry, which helps to verify the availability of the container images for users. Also, it ensures that the container image can be scanned by the security scanner and published on the Red Hat Ecosystem Catalog . Custom registries employ diverse authentication methods, and the Red Hat Software certification program supports the following authentication methods along with the standard OCI registry API: Bearer Authentication OAuth2 Basic Authentication For more details about the authentication methods, see Supported auth methods . Download the Preflight certification utility . Ensure that you have the latest version to benefit from any updates or improvements. Run preflight with your multi-arch container image. Preflight will automatically run and submit results for all architectures if the supplied image is a manifest list. Review and address the preflight certification results. Submit results on Red Hat Partner Connect .
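For example, a preflight run against a multi-arch image might look like the following sketch; the image reference, token, and project ID are placeholders, and the flags shown are those of the openshift-preflight utility:

preflight check container registry.example.com/mynamespace/myapp:1.0 \
  --submit \
  --pyxis-api-token=<your-api-token> \
  --certification-project-id=<your-project-id>

Because myapp:1.0 is assumed here to be a manifest list, preflight runs the checks once per architecture and submits the results for each.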
14.3.1. Building and pushing multi-arch container images using Podman Follow these instructions to build and push multi-arch images using Podman: Prerequisites Podman is installed on your system. You have a Dockerfile that defines the image you want to build for multiple architectures. You have a Quay.io account or any other container registry account. Procedure Prepare your Dockerfile. Build and push the multi-arch container images. Check the podman-manifest documentation for instructions on building and pushing the multi-arch container images.
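A minimal sketch of that workflow, assuming a hypothetical quay.io/example/myapp repository and two target architectures (cross-architecture builds typically also require qemu-user-static or an equivalent emulation setup on the build host):

podman manifest create quay.io/example/myapp:1.0
podman build --platform linux/amd64,linux/arm64 --manifest quay.io/example/myapp:1.0 .
podman manifest push --all quay.io/example/myapp:1.0 docker://quay.io/example/myapp:1.0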
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/assembly_working-with-containers_configuring-the-system-and-running-tests-by-using-cockpit-for-non-containerized-application
Chapter 2. Using Maven with JBoss EAP
Chapter 2. Using Maven with JBoss EAP 2.1. Learn about Maven 2.1.1. About the Maven Repository Apache Maven is a distributed build automation tool used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model, or POM, files to define projects and manage the build process. POMs describe the module and component dependencies, build order, and targets for the resulting project packaging and output using an XML file. This ensures that the project is built in a correct and uniform manner. Maven achieves this by using a repository. A Maven repository stores Java libraries, plug-ins, and other build artifacts. The default public repository is the Maven 2 Central Repository , but repositories can also be private and internal within a company, with the goal of sharing common artifacts among development teams. Repositories are also available from third parties. JBoss EAP includes a Maven repository that contains many of the requirements that Jakarta EE developers typically use to build applications on JBoss EAP. To configure your project to use this repository, see Configure the JBoss EAP Maven Repository . For more information about Maven, see Welcome to Apache Maven . For more information about Maven repositories, see Apache Maven Project - Introduction to Repositories . 2.1.2. About the Maven POM File The Project Object Model, or POM, file is a configuration file used by Maven to build projects. It is an XML file that contains information about the project and how to build it, including the location of the source, test, and target directories, the project dependencies, plug-in repositories, and goals it can execute. It can also include additional details about the project, including the version, description, developers, mailing list, license, and more. A pom.xml file requires some configuration options and will default all others. The schema for the pom.xml file can be found at http://maven.apache.org/maven-v4_0_0.xsd . For more information about POM files, see the Apache Maven Project POM Reference . Minimum Requirements of a Maven POM File The minimum requirements of a pom.xml file are as follows: project root modelVersion groupId - the ID of the project's group artifactId - the ID of the artifact (project) version - the version of the artifact under the specified group Example: Basic pom.xml File A basic pom.xml file might look like this (the coordinates here are placeholders): <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> </project> 2.1.3. About the Maven Settings File The Maven settings.xml file contains user-specific configuration information for Maven. It contains information that must not be distributed with the pom.xml file, such as developer identity, proxy information, local repository location, and other settings specific to a user. There are two locations where the settings.xml can be found: In the Maven installation: The settings file can be found in the $M2_HOME/conf/ directory. These settings are referred to as global settings. The default Maven settings file is a template that can be copied and used as a starting point for the user settings file. In the user's installation: The settings file can be found in the ${user.home}/.m2/ directory. If both the Maven and user settings.xml files exist, the contents are merged. Where there are overlaps, the user's settings.xml file takes precedence.
Example: Maven Settings File <?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <profiles> <!-- Configure the JBoss EAP Maven repository --> <profile> <id>jboss-eap-maven-repository</id> <repositories> <repository> <id>jboss-eap</id> <url>file:///path/to/repo/jboss-eap-7.4.0-maven-repository/maven-repository</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-eap-maven-plugin-repository</id> <url>file:///path/to/repo/jboss-eap-7.4.0-maven-repository/maven-repository</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <!-- Optionally, make the repository active by default --> <activeProfile>jboss-eap-maven-repository</activeProfile> </activeProfiles> </settings> The schema for the settings.xml file can be found at http://maven.apache.org/xsd/settings-1.0.0.xsd . 2.1.4. About Maven Repository Managers A repository manager is a tool that allows you to easily manage Maven repositories. Repository managers are useful in multiple ways: They provide the ability to configure proxies between your organization and remote Maven repositories. This provides a number of benefits, including faster and more efficient deployments and a better level of control over what is downloaded by Maven. They provide deployment destinations for your own generated artifacts, allowing collaboration between different development teams across an organization. For more information about Maven repository managers, see Best Practice - Using a Repository Manager . Commonly used Maven repository managers Sonatype Nexus See the Sonatype Nexus documentation for more information about Nexus. Artifactory See the JFrog Artifactory documentation for more information about Artifactory. Apache Archiva See Apache Archiva: The Build Artifact Repository Manager for more information about Apache Archiva. Note In an enterprise environment, where a repository manager is usually used, Maven should query all artifacts for all projects using this manager. Because Maven uses all declared repositories to find missing artifacts, if it cannot find what it is looking for, it will try to look for it in the central repository (defined in the built-in parent POM). To override this central location, you can add a repository definition with the ID central so that the default central repository is now your repository manager as well. This works well for established projects, but for clean or 'new' projects it causes a problem as it creates a cyclic dependency. 2.2. Install Maven and the JBoss EAP Maven Repository 2.2.1. Download and Install Maven Follow these steps to download and install Maven: If you are using Red Hat CodeReady Studio to build and deploy your applications, skip this procedure. Maven is distributed with Red Hat CodeReady Studio. If you are using the Maven command line to build and deploy your applications to JBoss EAP, you must download and install Maven. Go to Apache Maven Project - Download Maven and download the latest distribution for your operating system. See the Maven documentation for information on how to download and install Apache Maven for your operating system.
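After installation, you can confirm that Maven is available on your path by checking its version, which also reports the Java and operating system versions in use:

mvn -version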
2.2.2. Download the JBoss EAP Maven Repository You can use either method to download the JBoss EAP Maven repository: Download the JBoss EAP Maven repository ZIP file . Download the JBoss EAP Maven repository using the Offliner application . 2.2.2.1. Download the JBoss EAP Maven Repository ZIP File Follow these steps to download the JBoss EAP Maven repository. Log in to the JBoss EAP download page on the Red Hat Customer Portal. Select 7.4 in the Version drop-down menu. Find the Red Hat JBoss Enterprise Application Platform 7.4 Maven Repository entry in the list and click Download to download a ZIP file containing the repository. Save the ZIP file to the desired directory. Extract the ZIP file. 2.2.2.2. Download the JBoss EAP Maven Repository with the Offliner Application The Offliner application is available as an alternative option to download the Maven artifacts for developing JBoss EAP applications using the Red Hat Maven repository. Important The process of downloading the JBoss EAP Maven repository using the Offliner application is provided as Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Log in to the JBoss EAP download page on the Red Hat Customer Portal. Select 7.4 in the Version drop-down menu. Find the Red Hat JBoss Enterprise Application Platform 7.4 Maven Repository Offliner Content List entry in the list and click Download . Save the text file to the desired directory. Note This file does not contain license information. The artifacts downloaded by the Offliner application have the same licenses as specified in the Maven repository ZIP file that is distributed with JBoss EAP. Download the Offliner application from the Maven Central Repository. Run the Offliner application using the following command: The artifacts from the JBoss EAP Maven repository are downloaded into the DOWNLOAD_FOLDER directory. See the Offliner documentation for more information on running the Offliner application. Note The generated JBoss EAP Maven repository will have the same content that is currently available in the JBoss EAP Maven repository ZIP file. It will not contain artifacts available in the Maven Central repository. 2.2.3. Install the JBoss EAP Maven Repository You can use the JBoss EAP Maven repository available online, or download and install it locally using any one of the three listed methods: Install the JBoss EAP Maven repository on your local file system. For detailed instructions, see Install the JBoss EAP Maven Repository Locally . Install the JBoss EAP Maven repository on the Apache Web Server. For more information, see Install the JBoss EAP Maven Repository for Use with Apache httpd . Install the JBoss EAP Maven repository using the Nexus Maven Repository Manager. For more information, see Repository Management Using Nexus Maven Repository Manager . 2.2.3.1. Install the JBoss EAP Maven Repository Locally Use this option to install the JBoss EAP Maven Repository to the local file system. This is easy to configure and allows you to get up and running quickly on your local machine.
Important This method can help you become familiar with using Maven for development but is not recommended for team production environments. Before downloading a new Maven repository, remove the cached repository/ subdirectory located under the .m2/ directory. To install the JBoss EAP Maven repository to the local file system: Make sure you have downloaded the JBoss EAP Maven repository ZIP file to your local file system. Unzip the file on the local file system of your choosing. This creates a new jboss-eap-7.4.0-maven-repository/ directory, which contains the Maven repository in a subdirectory named maven-repository/ . Important If you want to use an older local repository, you must configure it separately in the Maven settings.xml configuration file. Each local repository must be configured within its own <repository> tag. 2.2.3.2. Install the JBoss EAP Maven Repository for Use with Apache httpd Installing the JBoss EAP Maven Repository for use with Apache httpd is a good option for multi-user and cross-team development environments because any developer that can access the web server can also access the Maven repository. Before installing the JBoss EAP Maven Repository, you must first configure Apache httpd. See the Apache HTTP Server Project documentation for instructions. Ensure that you have the JBoss EAP Maven repository ZIP file downloaded to your local file system. Unzip the file in a directory that is web accessible on the Apache server. Configure Apache to allow read access and directory browsing in the created directory. This configuration allows a multi-user environment to access the Maven repository on Apache httpd. 2.3. Use the Maven Repository 2.3.1. Configure the JBoss EAP Maven Repository Overview There are two approaches to direct Maven to use the JBoss EAP Maven Repository in your project: You can configure the repositories in the Maven global or user settings. You can configure the repositories in the project's POM file. Configure the JBoss EAP Maven Repository Using the Maven Settings This is the recommended approach. Maven settings used with a repository manager or a repository on a shared server provide better control and manageability of projects. Settings also provide the ability to use an alternative mirror to redirect all lookup requests for a specific repository to your repository manager without changing the project files. For more information about mirrors, see http://maven.apache.org/guides/mini/guide-mirror-settings.html . This method of configuration applies across all Maven projects, as long as the project POM file does not contain repository configuration. This section describes how to configure the Maven settings. You can configure the Maven install global settings or the user's install settings. Configure the Maven Settings File Locate the Maven settings.xml file for your operating system. It is usually located in the ${user.home}/.m2/ directory. For Linux or Mac, this is ~/.m2/ For Windows, this is \Documents and Settings\.m2\ or \Users\.m2\ If you do not find a settings.xml file, copy the settings.xml file from the ${user.home}/.m2/conf/ directory into the ${user.home}/.m2/ directory. Copy the following XML into the <profiles> element of the settings.xml file. Determine the URL of the JBoss EAP repository and replace JBOSS_EAP_REPOSITORY_URL with it.
<!-- Configure the JBoss Enterprise Maven repository --> <profile> <id>jboss-enterprise-maven-repository</id> <repositories> <repository> <id>jboss-enterprise-maven-repository</id> <url> JBOSS_EAP_REPOSITORY_URL </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-enterprise-maven-repository</id> <url> JBOSS_EAP_REPOSITORY_URL </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> The following is an example configuration that accesses the online JBoss EAP Maven repository. <!-- Configure the JBoss Enterprise Maven repository --> <profile> <id>jboss-enterprise-maven-repository</id> <repositories> <repository> <id>jboss-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Copy the following XML into the <activeProfiles> element of the settings.xml file. <activeProfile>jboss-enterprise-maven-repository</activeProfile> If you modify the settings.xml file while Red Hat CodeReady Studio is running, you must refresh the user settings. From the menu, choose Window Preferences . In the Preferences window, expand Maven and choose User Settings . Click the Update Settings button to refresh the Maven user settings in Red Hat CodeReady Studio. Important If your Maven repository contains outdated artifacts, you might encounter one of the following Maven error messages when you build or deploy your project: Missing artifact ARTIFACT_NAME [ERROR] Failed to execute goal on project PROJECT_NAME ; Could not resolve dependencies for PROJECT_NAME To resolve the issue, delete the cached version of your local repository to force a download of the latest Maven artifacts. The cached repository is located here: ${user.home}/.m2/repository/ Configure the JBoss EAP Maven Repository Using the Project POM Warning You should avoid this method of configuration because it overrides the global and user Maven settings for the configured project. You must plan carefully if you decide to configure repositories using the project POM file. Transitively included POMs are an issue with this type of configuration, because Maven must query the external repositories for missing artifacts, which slows the build process. It can also cause you to lose control over where your artifacts are coming from. Note The URL of the repository depends on where the repository is located: on the file system or on a web server. For information on how to install the repository, see: Install the JBoss EAP Maven Repository . The following are examples for each of the installation options: File System file:///path/to/repo/jboss-eap-maven-repository Apache Web Server http://intranet.acme.com/jboss-eap-maven-repository/ Nexus Repository Manager https://intranet.acme.com/nexus/content/repositories/jboss-eap-maven-repository Configuring the Project's POM File Open your project's pom.xml file in a text editor. Add the following repository configuration.
If there is already a <repositories> configuration in the file, then add the <repository> element to it. Be sure to change the <url> to the actual repository location. <repositories> <repository> <id>jboss-eap-repository-group</id> <name>JBoss EAP Maven Repository</name> <url> JBOSS_EAP_REPOSITORY_URL </url> <layout>default</layout> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> </repository> </repositories> Add the following plug-in repository configuration. If there is already a <pluginRepositories> configuration in the file, then add the <pluginRepository> element to it. <pluginRepositories> <pluginRepository> <id>jboss-eap-repository-group</id> <name>JBoss EAP Maven Repository</name> <url> JBOSS_EAP_REPOSITORY_URL </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> </pluginRepositories> Determine the URL of the JBoss EAP Repository The repository URL depends on where the repository is located. You can configure Maven to use any of the following repository locations. To use the online JBoss EAP Maven repository, specify the following URL: https://maven.repository.redhat.com/ga/ To use a JBoss EAP Maven repository installed on the local file system, you must download the repository and then use the local file path for the URL. For example: file:///path/to/repo/jboss-eap-7.4.0-maven-repository/maven-repository/ If you install the repository on an Apache Web Server, the repository URL will be similar to the following: http://intranet.acme.com/jboss-eap-7.4.0-maven-repository/maven-repository/ If you install the JBoss EAP Maven repository using the Nexus Repository Manager, the URL will look something like the following: https://intranet.acme.com/nexus/content/repositories/jboss-eap-7.4.0-maven-repository/maven-repository/ Note Remote repositories are accessed using common protocols such as http:// for a repository on an HTTP server or file:// for a repository on a file server. 2.3.2. Configure Maven for Use with Red Hat CodeReady Studio The artifacts and dependencies needed to build and deploy applications to Red Hat JBoss Enterprise Application Platform are hosted on a public repository. You must direct Maven to use this repository when you build your applications. This section covers the steps to configure Maven if you plan to build and deploy applications using Red Hat CodeReady Studio. Maven is distributed with Red Hat CodeReady Studio, so it is not necessary to install it separately. However, you must configure Maven for use by the Java EE Web Project wizard for deployments to JBoss EAP. The procedure below demonstrates how to configure Maven for use with JBoss EAP by editing the Maven configuration file from within Red Hat CodeReady Studio. Note If you set the Target runtime to 7.4 or a later runtime version in Red Hat CodeReady Studio, your project is compatible with the Jakarta EE 8 specification. Configure Maven in Red Hat CodeReady Studio Click Window Preferences , expand JBoss Tools and select JBoss Maven Integration . Figure 2.1. JBoss Maven Integration Pane in the Preferences Window Click Configure Maven Repositories . Click Add Repository to configure the JBoss Enterprise Maven repository. Complete the Add Maven Repository dialog as follows: Set the Profile ID , Repository ID , and Repository Name values to jboss-ga-repository . 
Set the Repository URL value to http://maven.repository.redhat.com/ga . Click the Active by default checkbox to enable the Maven repository. Click OK . Figure 2.2. Add Maven Repository Review the repositories and click Finish . You are prompted with the message "Are you sure you want to update the file MAVEN_HOME/settings.xml ?" . Click Yes to update the settings. Click OK to close the dialog. The JBoss EAP Maven repository is now configured for use with Red Hat CodeReady Studio. 2.3.3. Manage Project Dependencies This section describes the usage of Bill of Materials (BOM) POMs for Red Hat JBoss Enterprise Application Platform. A BOM is a Maven pom.xml (POM) file that specifies the versions of all runtime dependencies for a given module. Version dependencies are listed in the dependency management section of the file. A project uses a BOM by adding its groupId:artifactId:version (GAV) to the dependency management section of the project pom.xml file and specifying the <scope>import</scope> and <type>pom</type> element values. Note In many cases, dependencies in project POM files use the provided scope. This is because these classes are provided by the application server at runtime and it is not necessary to package them with the user application. Supported Maven Artifacts As part of the product build process, all runtime components of JBoss EAP are built from source in a controlled environment. This helps to ensure that the binary artifacts do not contain any malicious code, and that they can be supported for the life of the product. These artifacts can be easily identified by the -redhat version qualifier, for example 1.0.0-redhat-1 . Adding a supported artifact to the build configuration pom.xml file ensures that the build is using the correct binary artifact for local building and testing. Note that an artifact with a -redhat version is not necessarily part of the supported public API, and might change in future revisions. For information about the public supported API, see the Javadoc documentation included in the release. For example, to use the supported version of Hibernate, add something similar to the following to your build configuration. <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>5.3.1.Final-redhat-1</version> <scope>provided</scope> </dependency> Notice that the above example includes a value for the <version/> field. However, it is recommended to use Maven dependency management for configuring dependency versions. Dependency Management Maven includes a mechanism for managing the versions of direct and transitive dependencies throughout the build. For general information about using dependency management, see the Apache Maven Project: Introduction to the Dependency Mechanism . Using one or more supported Red Hat dependencies directly in your build does not guarantee that all transitive dependencies of the build will be fully supported Red Hat artifacts. It is common for Maven builds to use a mix of artifact sources from the Maven central repository and other Maven repositories. There is a dependency management BOM included in the JBoss EAP Maven repository, which specifies all the supported JBoss EAP binary artifacts. This BOM can be used in a build to ensure that Maven will prioritize supported JBoss EAP dependencies for all direct and transitive dependencies in the build. In other words, transitive dependencies will be managed to the correct supported dependency version where applicable. 
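To inspect exactly which dependency versions the imported BOM manages for your build, you can print the effective POM. A minimal sketch, assuming the mvn command is on your PATH and is run from the project directory:
# Print the fully resolved POM, including dependencyManagement entries pulled in from the BOM
$ mvn help:effective-pom | less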
The version of this BOM matches the version of the JBoss EAP release. <dependencyManagement> <dependencies> ... <dependency> <groupId>org.jboss.bom</groupId> <artifactId>eap-runtime-artifacts</artifactId> <version>7.4.0.GA</version> <type>pom</type> <scope>import</scope> </dependency> ... </dependencies> </dependencyManagement> Note In JBoss EAP 7, the name of this BOM was changed from eap6-supported-artifacts to eap-runtime-artifacts . The purpose of this change is to make it clearer that the artifacts in this POM are part of the JBoss EAP runtime, but are not necessarily part of the supported public API. Some of the JARs contain internal API and functionality, which might change between releases. JBoss EAP Jakarta EE Specs BOM The jboss-jakartaee-8.0 BOM contains the Jakarta EE Specification API JARs used by JBoss EAP. To use this BOM in a project, first add a dependency for the jboss-jakartaee-8.0 BOM in the dependencyManagement section of the POM file, specifying org.jboss.spec for its groupId , and then add the dependencies for the specific APIs needed by the application. These dependencies do not require a version and use a scope of provided because the APIs are included in the jboss-jakartaee-8.0 BOM. The following example uses the 1.0.0.Alpha1 version of the jboss-jakartaee-8.0 BOM to add dependencies for the Servlet and Jakarta Server Pages APIs. <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.spec</groupId> <artifactId>jboss-jakartaee-8.0</artifactId> <version>1.0.0.Alpha1</version> <type>pom</type> <scope>import</scope> </dependency> ... </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.spec.javax.servlet</groupId> <artifactId>jboss-servlet-api_4.0_spec</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.servlet.jsp</groupId> <artifactId>jboss-jsp-api_2.3_spec</artifactId> <scope>provided</scope> </dependency> ... </dependencies> Note JBoss EAP packages and provides BOMs for the APIs of most of the product components. Many of these BOMs are conveniently packaged into one larger jboss-eap-jakartaee8 BOM with a groupId of org.jboss.bom . The jboss-jakartaee-8.0 BOM, whose groupId is org.jboss.spec , is included in this larger BOM. This means that if you are using additional JBoss EAP dependencies that are packaged in this BOM, you can just add the one jboss-eap-jakartaee8 BOM to your project's POM file rather than separately adding the jboss-jakartaee-8.0 and other BOM dependencies. JBoss EAP BOMs Available for Application Development The following table lists the Maven BOMs that are available for application development.
Table 2.1. JBoss BOMs
BOM Artifact ID | Use Case
eap-runtime-artifacts | Supported JBoss EAP runtime artifacts.
jboss-eap-jakartaee8 | Supported JBoss EAP Jakarta EE 8 APIs plus additional JBoss EAP API JARs.
jboss-eap-jakartaee8-with-tools | jboss-eap-jakartaee8 plus development tools such as Arquillian.
Note These BOMs from JBoss EAP 6 have been consolidated into fewer BOMs to make usage simpler for most use cases. The Hibernate, logging, transactions, messaging, and other public API JARs are now included in the jboss-eap-jakartaee8 BOM instead of requiring a separate BOM for each case. The following example uses the 7.4.0.GA version of the jboss-eap-jakartaee8 BOM.
<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>7.4.0.GA</version> <type>pom</type> <scope>import</scope> </dependency> ... </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <scope>provided</scope> </dependency> ... </dependencies> JBoss EAP Client BOMs The client BOMs do not create a dependency management section or define dependencies. Instead, they are an aggregate of other BOMs and are used to package the set of dependencies necessary for a remote client use case. The wildfly-ejb-client-bom , wildfly-jms-client-bom , and wildfly-jaxws-client-bom are managed by the jboss-eap-jakartaee8 BOM, so you do not need to manage the versions in your project dependencies. The following is an example of how to add the wildfly-ejb-client-bom , wildfly-jms-client-bom , and wildfly-jaxws-client-bom dependencies to your project. <dependencyManagement> <dependencies> <!-- JBoss stack of the Jakarta EE APIs and related components. --> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>7.4.0.GA</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> ... </dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.eap</groupId> <artifactId>wildfly-ejb-client-bom</artifactId> <type>pom</type> </dependency> <dependency> <groupId>org.jboss.eap</groupId> <artifactId>wildfly-jms-client-bom</artifactId> <type>pom</type> </dependency> <dependency> <groupId>org.jboss.eap</groupId> <artifactId>wildfly-jaxws-client-bom</artifactId> <type>pom</type> </dependency> ... </dependencies> For more information about Maven Dependencies and BOM POM files, see Apache Maven Project - Introduction to the Dependency Mechanism .
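As a quick sanity check that your build resolves supported artifacts through the BOM, you can inspect the dependency tree. A minimal sketch, assuming the mvn command is on your PATH and is run from the project directory:
# Supported Red Hat artifacts carry the -redhat version qualifier
$ mvn dependency:tree | grep redhat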
[ "<project> <modelVersion>4.0.0</modelVersion> <groupId>com.jboss.app</groupId> <artifactId>my-app</artifactId> <version>1</version> </project>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd\"> <profiles> <!-- Configure the JBoss EAP Maven repository --> <profile> <id>jboss-eap-maven-repository</id> <repositories> <repository> <id>jboss-eap</id> <url>file:///path/to/repo/jboss-eap-7.4.0-maven-repository/maven-repository</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-eap-maven-plugin-repository</id> <url>file:///path/to/repo/jboss-eap-7.4.0-maven-repository/maven-repository</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <!-- Optionally, make the repository active by default --> <activeProfile>jboss-eap-maven-repository</activeProfile> </activeProfiles> </settings>", "java -jar offliner.jar -r http://repository.redhat.com/ga/ -d DOWNLOAD_FOLDER jboss-eap-7.4.0-maven-repository-content-with-sha256-checksums.txt", "<!-- Configure the JBoss Enterprise Maven repository --> <profile> <id>jboss-enterprise-maven-repository</id> <repositories> <repository> <id>jboss-enterprise-maven-repository</id> <url> JBOSS_EAP_REPOSITORY_URL </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-enterprise-maven-repository</id> <url> JBOSS_EAP_REPOSITORY_URL </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>", "<!-- Configure the JBoss Enterprise Maven repository --> <profile> <id>jboss-enterprise-maven-repository</id> <repositories> <repository> <id>jboss-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>", "<activeProfile>jboss-enterprise-maven-repository</activeProfile>", "<repositories> <repository> <id>jboss-eap-repository-group</id> <name>JBoss EAP Maven Repository</name> <url> JBOSS_EAP_REPOSITORY_URL </url> <layout>default</layout> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> </repository> </repositories>", "<pluginRepositories> <pluginRepository> <id>jboss-eap-repository-group</id> <name>JBoss EAP Maven Repository</name> <url> JBOSS_EAP_REPOSITORY_URL </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> </pluginRepositories>", "<dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>5.3.1.Final-redhat-1</version> 
<scope>provided</scope> </dependency>", "<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>eap-runtime-artifacts</artifactId> <version>7.4.0.GA</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>", "<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.spec</groupId> <artifactId>jboss-jakartaee-8.0</artifactId> <version>1.0.0.Alpha1</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.spec.javax.servlet</groupId> <artifactId>jboss-servlet-api_4.0_spec</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.servlet.jsp</groupId> <artifactId>jboss-jsp-api_2.3_spec</artifactId> <scope>provided</scope> </dependency> </dependencies>", "<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>7.4.0.GA</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <scope>provided</scope> </dependency> </dependencies>", "<dependencyManagement> <dependencies> <!-- JBoss stack of the Jakarta EE APIs and related components. --> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>7.4.0.GA</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.eap</groupId> <artifactId>wildfly-ejb-client-bom</artifactId> <type>pom</type> </dependency> <dependency> <groupId>org.jboss.eap</groupId> <artifactId>wildfly-jms-client-bom</artifactId> <type>pom</type> </dependency> <dependency> <groupId>org.jboss.eap</groupId> <artifactId>wildfly-jaxws-client-bom</artifactId> <type>pom</type> </dependency> </dependencies>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/using_maven_with_eap
6.5. Installing Red Hat Directory Server
6.5. Installing Red Hat Directory Server Certificate System uses Red Hat Directory Server to store system certificates and user data. You can install Directory Server either on the same host as Certificate System or on any other host in the network. Important FIPS mode must be enabled on the RHEL host before you install Directory Server. To verify that FIPS mode is enabled: If the returned value is 1 , FIPS mode is enabled. 6.5.1. Preparing a Directory Server Instance for Certificate System Perform the following steps to install Red Hat Directory Server: Make sure you have attached a subscription that provides Directory Server to the host. Enable the Directory Server repository: Install the Directory Server and the openldap-clients packages: Set up a Directory Server instance. Generate a DS configuration file; for example, /tmp/ds-setup.inf : Customize the DS configuration file as follows: Create the instance using the dscreate command with the setup configuration file: For a detailed procedure, see the Red Hat Directory Server Installation Guide . 6.5.2. Preparing for Configuring Certificate System If you chose to set up TLS between Certificate System and Directory Server, use the following parameters in the configuration file that you pass to the pkispawn utility when installing Certificate System, as described in Section 7.3, "Understanding the pkispawn Utility" : Note You must first create a basic TLS server authentication connection. During post-installation, you will return and configure the connection to require that a client authentication certificate be presented to Directory Server. Once client authentication is set up, the pki_ds_password parameter is no longer relevant. The value of the pki_ds_database parameter is a name used by the pkispawn utility to create the corresponding subsystem database on the Directory Server instance. The value of the pki_ds_hostname parameter depends on the install location of the Directory Server instance. This depends on the values used in Section 6.5.1, "Preparing a Directory Server Instance for Certificate System" . When you set pki_ds_secure_connection=True , the following parameters must be set: pki_ds_secure_connection_ca_pem_file : Sets the fully-qualified path, including the file name, of the file that contains an exported copy of the Directory Server's CA certificate. This file must exist before pkispawn can use it. pki_ds_ldaps_port : Sets the value of the secure LDAPS port that Directory Server listens on. The default is 636 .
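After the dscreate step, you can confirm that the new instance is up before continuing with the Certificate System preparation. A minimal sketch: the instance name localhost and the suffix dc=example,dc=com come from the example setup file above, and server.example.com is a placeholder for your Directory Server host.
# Check the state of the instance created by dscreate
$ dsctl localhost status
# openldap-clients provides ldapsearch; confirm the suffix answers a base search
$ ldapsearch -x -H ldap://server.example.com -b "dc=example,dc=com" -s base dn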
[ "sysctl crypto.fips_enabled", "subscription-manager repos --enable=dirsrv-11.7-for-rhel-8-x86_64-rpms", "dnf module install redhat-ds", "dnf install openldap-clients", "dscreate create-template /tmp/ds-setup.inf", "sed -i -e \"s/;instance_name = .*/instance_name = localhost/g\" -e \"s/;root_password = .*/root_password = Secret.123/g\" -e \"s/;suffix = .*/suffix = dc=example,dc=com/g\" -e \"s/;create_suffix_entry = .*/create_suffix_entry = True/g\" -e \"s/;self_sign_cert = .*/self_sign_cert = False/g\" /tmp/ds-setup.inf", "dscreate from-file /tmp/ds-setup.inf", "pki_ds_database= back_end_database_name pki_ds_hostname= host_name pki_ds_secure_connection=True pki_ds_secure_connection_ca_pem_file= path_to_CA_or_self-signed_certificate pki_ds_password= password pki_ds_ldaps_port= port pki_ds_bind_dn=cn=Directory Manager" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/installing_RHDS
Chapter 7. Managing user accounts using Ansible playbooks
Chapter 7. Managing user accounts using Ansible playbooks You can manage users in IdM using Ansible playbooks. After presenting the user life cycle, this chapter describes how to use Ansible playbooks for the following operations: Ensuring the presence of a single user listed directly in the YML file. Ensuring the presence of multiple users listed directly in the YML file. Ensuring the presence of multiple users listed in a JSON file that is referenced from the YML file. Ensuring the absence of users listed directly in the YML file. 7.1. User life cycle Identity Management (IdM) supports three user account states: Stage users are not allowed to authenticate. This is an initial state. Some of the user account properties required for active users cannot be set, for example, group membership. Active users are allowed to authenticate. All required user account properties must be set in this state. Preserved users are former active users that are considered inactive and cannot authenticate to IdM. Preserved users retain most of the account properties they had as active users, but they are not part of any user groups. You can delete user entries permanently from the IdM database. Important Deleted user accounts cannot be restored. When you delete a user account, all the information associated with the account is permanently lost. A new administrator can only be created by a user with administrator rights, such as the default admin user. If you accidentally delete all administrator accounts, the Directory Manager must create a new administrator manually in the Directory Server. Warning Do not delete the admin user. As admin is a pre-defined user required by IdM, this operation causes problems with certain commands. If you want to define and use an alternative admin user, disable the pre-defined admin user with ipa user-disable admin after you have granted admin permissions to at least one different user. Warning Do not add local users to IdM. The Name Service Switch (NSS) always resolves IdM users and groups before resolving local users and groups. This means that, for example, IdM group membership does not work for local users. 7.2. Ensuring the presence of an IdM user using an Ansible playbook The following procedure describes ensuring the presence of a user in IdM using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/MyPlaybooks/ directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server, or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the data of the user whose presence in IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/add-user.yml file.
For example, to create a user named idm_user and add Password123 as the user password: --- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_user ipauser: ipaadmin_password: "{{ ipaadmin_password }}" name: idm_user first: Alice last: Acme uid: 1000111 gid: 10011 phone: "+555123457" email: [email protected] passwordexpiration: "2023-01-19 23:59:59" password: "Password123" update_password: on_create You must use the following options to add a user: name : the login name first : the first name string last : the last name string For the full list of available user options, see the /usr/share/doc/ansible-freeipa/README-user.md Markdown file. Note If you use the update_password: on_create option, Ansible only creates the user password when it creates the user. If the user is already created with a password, Ansible does not generate a new password. Run the playbook: Verification You can verify if the new user account exists in IdM by using the ipa user-show command: Log into ipaserver as admin: Request a Kerberos ticket for admin: Request information about idm_user : The user named idm_user is present in IdM. 7.3. Ensuring the presence of multiple IdM users using Ansible playbooks The following procedure describes ensuring the presence of multiple users in IdM using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/MyPlaybooks/ directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server, or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the data of the users whose presence you want to ensure in IdM. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-users-present.yml file. For example, to create users idm_user_1 , idm_user_2 , and idm_user_3 , and add Password123 as the password of idm_user_1 : --- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_users ipauser: ipaadmin_password: "{{ ipaadmin_password }}" users: - name: idm_user_1 first: Alice last: Acme uid: 10001 gid: 10011 phone: "+555123457" email: [email protected] passwordexpiration: "2023-01-19 23:59:59" password: "Password123" - name: idm_user_2 first: Bob last: Acme uid: 100011 gid: 10011 - name: idm_user_3 first: Eve last: Acme uid: 1000111 gid: 10011 Note If you do not specify the update_password: on_create option, Ansible re-sets the user password every time the playbook is run: if the user has changed the password since the last time the playbook was run, Ansible re-sets the password. Run the playbook: Verification You can verify if the user account exists in IdM by using the ipa user-show command: Log into ipaserver as administrator: Display information about idm_user_1 : The user named idm_user_1 is present in IdM. 7.4.
Ensuring the presence of multiple IdM users from a JSON file using Ansible playbooks The following procedure describes how you can ensure the presence of multiple users in IdM using an Ansible playbook. The users are stored in a JSON file. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary tasks. Reference the JSON file with the data of the users whose presence you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/README-user.md file: Create the users.json file, and add the IdM users into it. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/README-user.md file. For example, to create users idm_user_1 , idm_user_2 , and idm_user_3 , and add Password123 as the password of idm_user_1 : { "users": [ { "name": "idm_user_1", "first": "First 1", "last": "Last 1", "password": "Password123" }, { "name": "idm_user_2", "first": "First 2", "last": "Last 2" }, { "name": "idm_user_3", "first": "First 3", "last": "Last 3" } ] } Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification You can verify if the user accounts are present in IdM using the ipa user-show command: Log into ipaserver as administrator: Display information about idm_user_1 : The user named idm_user_1 is present in IdM. 7.5. Ensuring the absence of users using Ansible playbooks The following procedure describes how you can use an Ansible playbook to ensure that specific users are absent from IdM. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the users whose absence from IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-users-present.yml file. For example, to delete users idm_user_1 , idm_user_2 , and idm_user_3 : --- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete users idm_user_1, idm_user_2, idm_user_3 ipauser: ipaadmin_password: "{{ ipaadmin_password }}" users: - name: idm_user_1 - name: idm_user_2 - name: idm_user_3 state: absent Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification You can verify that the user accounts do not exist in IdM by using the ipa user-show command: Log into ipaserver as administrator: Request information about idm_user_1 : The user named idm_user_1 does not exist in IdM. 7.6. Additional resources See the README-user.md Markdown file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/user directory.
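Because the ipauser module is idempotent, re-running a playbook is a convenient way to confirm that the state it describes is fully applied. A minimal sketch using the invocation style shown in this chapter; the file names are the same placeholders used above, and a second run is expected to report changed=0 in the play recap:
# The first run applies the changes; a repeat run should change nothing
$ ansible-playbook --vault-password-file=password_file -i inventory.file delete-users.yml
$ ansible-playbook --vault-password-file=password_file -i inventory.file delete-users.yml | grep -A 2 'PLAY RECAP'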
[ "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_user ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user first: Alice last: Acme uid: 1000111 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" update_password: on_create", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-IdM-user.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa user-show idm_user User login: idm_user First name: Alice Last name: Acme .", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_users ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 first: Alice last: Acme uid: 10001 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" - name: idm_user_2 first: Bob last: Acme uid: 100011 gid: 10011 - name: idm_user_3 first: Eve last: Acme uid: 1000111 gid: 10011", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-users.yml", "ssh [email protected] Password: [admin@server /]USD", "ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .", "[ipaserver] server.idm.example.com", "--- - name: Ensure users' presence hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Include users_present.json include_vars: file: users_present.json - name: Users present ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: \"{{ users }}\"", "{ \"users\": [ { \"name\": \"idm_user_1\", \"first\": \"First 1\", \"last\": \"Last 1\", \"password\": \"Password123\" }, { \"name\": \"idm_user_2\", \"first\": \"First 2\", \"last\": \"Last 2\" }, { \"name\": \"idm_user_3\", \"first\": \"First 3\", \"last\": \"Last 3\" } ] }", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /ensure-users-present-jsonfile.yml", "ssh [email protected] Password: [admin@server /]USD", "ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete users idm_user_1, idm_user_2, idm_user_3 ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 - name: idm_user_2 - name: idm_user_3 state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /delete-users.yml", "ssh [email protected] Password: [admin@server /]USD", "ipa user-show idm_user_1 ipa: ERROR: idm_user_1: user not found" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/managing-user-accounts-using-Ansible-playbooks_using-ansible-to-install-and-manage-idm
Chapter 1. Upgrading Red Hat build of Keycloak
Chapter 1. Upgrading Red Hat build of Keycloak This guide describes how to upgrade Red Hat build of Keycloak from version 22.0.x to version 24.0.10. Use the following procedures in this order: Review the migration changes from the previous version of Red Hat build of Keycloak. Upgrade the Red Hat build of Keycloak server. Upgrade the Red Hat build of Keycloak adapters. Upgrade the Red Hat build of Keycloak Admin Client. For Red Hat Single Sign-On 7.6 customers, use the Migration Guide instead of this guide.
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/upgrading_guide/intro
Chapter 18. Installing on OpenStack
Chapter 18. Installing on OpenStack 18.1. Preparing to install on OpenStack You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). 18.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 18.1.2. Choosing a method to install OpenShift Container Platform on OpenStack You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 18.1.2.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster on OpenStack with customizations : You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on OpenStack with Kuryr : You can install a customized OpenShift Container Platform cluster on RHOSP that uses Kuryr SDN. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Installing a cluster on OpenStack in a restricted network : You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 18.1.2.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on RHOSP infrastructure that you provision, by using one of the following methods: Installing a cluster on OpenStack on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure. By using this installation method, you can integrate your cluster with existing infrastructure and modifications. For installations on user-provisioned infrastructure, you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. You can use the provided Ansible playbooks to assist with the deployment process. Installing a cluster on OpenStack with Kuryr on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure that uses Kuryr SDN. 
Installing a cluster on OpenStack on your own SR-IOV infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure that uses single-root input/output virtualization (SR-IOV) networks to run compute machines. 18.1.3. Scanning RHOSP endpoints for legacy HTTPS certificates Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. Run the following script to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the provided script to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Prerequisites On the machine where you run the script, have the following software: Bash version 4.0 or greater grep OpenStack client jq OpenSSL version 1.1.1l or greater Populate the machine with RHOSP credentials for the target cloud. Procedure Save the following script to your machine:
#!/usr/bin/env bash

set -Eeuo pipefail

declare catalog san
catalog="$(mktemp)"
san="$(mktemp)"
readonly catalog san

declare invalid=0

openstack catalog list --format json --column Name --column Endpoints \
    | jq -r '.[] | .Name as $name | .Endpoints[] | select(.interface=="public") | [$name, .interface, .url] | join(" ")' \
    | sort \
    > "$catalog"

while read -r name interface url; do
    # Ignore HTTP
    if [[ ${url#"http://"} != "$url" ]]; then
        continue
    fi

    # Remove the schema from the URL
    noschema=${url#"https://"}

    # If the schema was not HTTPS, error
    if [[ "$noschema" == "$url" ]]; then
        echo "ERROR (unknown schema): $name $interface $url"
        exit 2
    fi

    # Remove the path and only keep host and port
    noschema="${noschema%%/*}"
    host="${noschema%%:*}"
    port="${noschema##*:}"

    # Add the port if it was implicit
    if [[ "$port" == "$host" ]]; then
        port='443'
    fi

    # Get the SAN fields
    openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \
        | openssl x509 -noout -ext subjectAltName \
        > "$san"

    # openssl returns the empty string if no SAN is found.
    # If a SAN is found, openssl is expected to return something like:
    #
    # X509v3 Subject Alternative Name:
    #     DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2
    if [[ "$(grep -c "Subject Alternative Name" "$san" || true)" -gt 0 ]]; then
        echo "PASS: $name $interface $url"
    else
        invalid=$((invalid+1))
        echo "INVALID: $name $interface $url"
    fi
done < "$catalog"

# clean up temporary files
rm "$catalog" "$san"

if [[ $invalid -gt 0 ]]; then
    echo "${invalid} legacy certificates were detected. Update your certificates to include a SAN field."
    exit 1
else
    echo "All HTTPS certificates for this cloud are valid."
fi
Run the script. Replace any certificates that the script reports as INVALID with certificates that contain SAN fields. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates will be rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead 18.1.3.1.
Scanning RHOSP endpoints for legacy HTTPS certificates manually Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. If you do not have access to the prerequisite tools that are listed in "Scanning RHOSP endpoints for legacy HTTPS certificates", perform the following steps to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the following steps to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Procedure On a command line, run the following command to view the URL of RHOSP public endpoints:
$ openstack catalog list
Record the URL for each HTTPS endpoint that the command returns. For each public endpoint, note the host and the port. Tip Determine the host of an endpoint by removing the scheme, the port, and the path. For each endpoint, run the following commands to extract the SAN field of the certificate: Set a host variable:
$ host=<host_name>
Set a port variable:
$ port=<port_number>
If the URL of the endpoint does not have a port, use the value 443 . Retrieve the SAN field of the certificate:
$ openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \
    | openssl x509 -noout -ext subjectAltName
Example output
X509v3 Subject Alternative Name:
    DNS:your.host.example.net
For each endpoint, look for output that resembles the example. If there is no output for an endpoint, the certificate of that endpoint is invalid and must be re-issued. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates are rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead 18.2. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.10, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 18.2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.10 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices . You have the metadata service enabled in RHOSP.
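To quickly confirm the storage prerequisite, you can check which storage services are registered in the RHOSP catalog. A minimal sketch, assuming the openstack CLI is configured with credentials for the target cloud; the grep pattern is illustrative only:
# Look for block storage (Cinder) and object storage (Swift) entries
$ openstack catalog list | grep -Ei 'cinder|volume|swift|object-store'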
18.2.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:
Table 18.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
Resource | Value
Floating IP addresses | 3
Ports | 15
Routers | 1
Subnets | 1
RAM | 88 GB
vCPUs | 22
Volume storage | 275 GB
Instances | 7
Security groups | 3
Security group rules | 60
Server groups | 2, plus 1 for each additional availability zone in each machine pool
A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 18.2.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.2.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 18.2.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.2.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.2.4.
Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift:
$ openstack role add --user <user> --project <project> swiftoperator
Your RHOSP deployment can now use Swift for the image registry. 18.2.5. Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-csi-storageclass
provisioner: cinder.csi.openstack.org
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  availability: <availability_zone_name>
Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration:
$ oc apply -f <storage_class_file_name>
Example output
storageclass.storage.k8s.io/custom-csi-storageclass created
Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-imageregistry
  namespace: openshift-image-registry 1
  annotations:
    imageregistry.openshift.io: "true"
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi 2
  storageClassName: <your_custom_storage_class> 3
1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration:
$ oc apply -f <pvc_file_name>
Example output
persistentvolumeclaim/csi-pvc-imageregistry created
Replace the original persistent volume claim in the image registry configuration with the new claim:
$ oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]'
Example output
config.imageregistry.operator.openshift.io/cluster patched
Over the next several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition:
$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
Example output
...
status:
  ...
18.2.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If no such network appears, see Creating a default floating IP network and Creating a default provider network. Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port. 18.2.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml. If your RHOSP distribution does not include the Horizon web UI, or if you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order.
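Optionally, you can confirm that the file is usable before you run the installation program by requesting a token with one of the cloud entries that you defined. This check is a suggestion rather than a documented step; the cloud name shiftstack comes from the preceding example:
USD openstack --os-cloud shiftstack token issue
If the command returns a token, the credentials and authorization service URL in clouds.yaml are valid.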
cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 18.2.8. Setting cloud provider options Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options based on the cloud configuration specification. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.1 and 16.2, this feature is only available for the Amphora provider. 5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . 
18.2.8.1. External load balancers that use pre-defined floating IP addresses Commonly, Red Hat OpenStack Platform (RHOSP) deployments disallow non-administrator users from creating specific floating IP addresses. If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers. If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method. Alternatively, you can modify the RHOSP Networking service (Neutron) to allow non-administrator users to create specific floating IP addresses. Additional resources For more information about cloud provider configuration, see OpenStack cloud provider options. 18.2.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both the installation program and those files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.2.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).
Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Installation configuration parameters section for more information about the available parameters. 18.2.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.2.11. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 18.2.11.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 18.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. 
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com. metadata Kubernetes resource ObjectMeta, from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev. The string must be 14 characters or fewer. platform The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } 18.2.11.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 18.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN. networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23. networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects.
For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.2.11.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. 
alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3, which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. Mint, Passthrough, Manual, or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes. Internal or External. The default value is External. Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035. sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. 18.2.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 18.5. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30. compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance. controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume.
If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 18.2.11.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 18.6. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . 
controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. 
Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet. If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP. A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf. 18.2.11.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork. The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 18.2.11.7. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor. The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process.
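As an optional preliminary check, you can confirm that a bare metal flavor is defined before you edit the file. The flavor name in this example is illustrative; substitute the name that your RHOSP administrator provides:
USD openstack flavor show <bare_metal_flavor>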
Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 18.2.11.8. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 18.2.11.8.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. 
Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 18.2.11.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 18.2.11.9. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. 
apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 18.2.12. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent: USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519. Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
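Optionally, you can confirm that the agent holds your key by listing its loaded identities:
USD ssh-add -l
The output lists the fingerprint of each key that the agent manages; the key that you added in the previous step should appear here.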
Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 18.2.13. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 18.2.13.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc commands. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 18.2.13.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own.
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 18.2.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 18.2.15. 
Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 18.2.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 18.2.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 18.2.18. Next steps Customize your cluster. If necessary, you can opt out of remote health reporting. If you need to enable external access to node ports, configure ingress cluster traffic by using a node port. If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses. 18.3. Installing a cluster on OpenStack with Kuryr In OpenShift Container Platform version 4.10, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP) that uses Kuryr SDN. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster. 18.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You verified that OpenShift Container Platform 4.10 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section.
You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices . 18.3.2. About Kuryr SDN Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform in the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: the RHOSP version is less than 16, and the deployment uses UDP services or a large number of TCP services on few hypervisors. Kuryr is likewise not recommended in deployments where the ovn-octavia Octavia driver is disabled and the deployment uses a large number of TCP services on few hypervisors. 18.3.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies use resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements:
Table 18.7. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr
Resource | Value
Floating IP addresses | 3 - plus the expected number of Services of LoadBalancer type
Ports | 1500 - 1 needed per Pod
Routers | 1
Subnets | 250 - 1 needed per Namespace/Project
Networks | 250 - 1 needed per Namespace/Project
RAM | 112 GB
vCPUs | 28
Volume storage | 275 GB
Instances | 7
Security groups | 250 - 1 needed per Service and per NetworkPolicy
Security group rules | 1000
Server groups | 2 - plus 1 for each additional availability zone in each machine pool
Load balancers | 100 - 1 needed per Service
Load balancer listeners | 500 - 1 needed per Service-exposed port
Load balancer pools | 500 - 1 needed per Service-exposed port
A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 18.3.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 18.3.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work.
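Tip Before you continue, you can confirm that the trunks extension is enabled by querying the Networking service. This command is a suggested check and is not part of the documented procedure:
USD openstack extension list --network | grep -i trunk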
In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 18.3.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud and an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This might take some time depending on the speed of your network and Undercloud disk. Because an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase the listeners' default timeouts for connections. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command: (undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000 Note This is not needed for RHOSP 13.0.13+. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director .
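Tip After the Overcloud deployment finishes, you can confirm that the Octavia API responds before you continue. This check is a suggestion, and assumes that the Octavia client is installed:
USD openstack loadbalancer list
An empty list is a valid result; an error indicates that Octavia is not correctly deployed.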
Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project. To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project. This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation. Note This task is unnecessary in RHOSP version 13.0.13 or later. Octavia implements a new ACL API that restricts access to the load balancers VIP. Get the project ID: USD openstack project show <project> Example output +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+ Add the project ID to octavia.conf for the controllers. Source the stackrc file: USD source stackrc # Undercloud credentials List the Overcloud controllers: USD openstack server list Example output +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ SSH into the controller(s): USD ssh heat-admin@192.168.24.8 Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user's account. Restart the Octavia worker so the new configuration loads. controller-0USD sudo docker restart octavia_worker Note Depending on your RHOSP environment, Octavia might not support UDP listeners. If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP version 16 and later support UDP. 18.3.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide.
It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. You can configure your cluster to use the Octavia OVN driver after your RHOSP cloud is upgraded from version 13 to version 16. 18.3.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported. Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported. Kuryr SDN does not support automatic unidling by a service. RHOSP environment limitations There are limitations when using Kuryr SDN that depend on your deployment environment. Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution. In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf , which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails. To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1 , i.e. CGO_ENABLED=1 , or ensure that the variable is absent. In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails. 
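As an illustration of that workaround, a hypothetical Go application could be rebuilt with CGO enabled so that the resolver honors the use-vc option; the module path and binary name here are placeholders, and only the CGO_ENABLED=1 setting is the relevant detail:
USD CGO_ENABLED=1 go build -o myapp ./cmd/myapp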
Note musl-based containers, including Alpine-based containers, do not support the use-vc option. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. Important If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service supports UDP load balancing by default. The recreation causes approximately one minute of downtime for the DNS service. 18.3.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.3.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 18.3.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.3.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.3.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.
Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 18.3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If no such network appears, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are:
Network | Range
machineNetwork | 10.0.0.0/16
serviceNetwork | 172.30.0.0/16
clusterNetwork | 10.128.0.0/14
Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 18.3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation.
clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 18.3.8. Setting cloud provider options Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options based on the cloud configuration specification. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.1 and 16.2, this feature is only available for the Amphora provider. 5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. 
This property is required if the value of the create-monitor property is True . 7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.1 and 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 18.3.8.1. External load balancers that use pre-defined floating IP addresses Commonly, Red Hat OpenStack Platform (RHOSP) deployments disallow non-administrator users from creating specific floating IP addresses. If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers. If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method. Alternatively, you can modify the RHOSP Networking service (Neutron) to allow non-administrator users to create specific floating IP addresses . Additional resources For more information about cloud provider configuration, see OpenStack cloud provider options . 18.3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. 
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 18.3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note Kuryr installations default to HTTP proxies. Prerequisites For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses.
To add a static route for the proxy configuration, from a command line as the root user, enter: USD ip route add <cluster_network_cidr> via <installer_subnet_gateway> The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates. You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.3.11. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. 
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 18.3.11.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 18.8. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 18.3.11.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 18.9. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . 
networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.3.11.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.10. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. 
All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access to your cluster machines.
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 18.3.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 18.11. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 18.3.11.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 18.12. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . 
The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . 
platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "m1.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 18.3.11.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 18.3.11.7. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. The minimal change relative to a default configuration is shown in the sketch that follows; the full sample appears after it.
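A minimal fragment of install-config.yaml with only the Kuryr-related change, assuming that every other value keeps its default:
networking:
  networkType: Kuryr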
This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16 1
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
    trunkSupport: true 2
    octaviaSupport: true 3
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 3 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not work properly. Trunks are needed to connect the pods to the RHOSP network, and Octavia is required to create the OpenShift Container Platform services. 18.3.11.8. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network. OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 18.3.11.8.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet.
Tip To create a network for a project that is named "openshift," enter the following command: USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command: USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 18.3.11.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described in "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network
...
platform:
  openstack:
    apiVIP: 192.0.2.13
    ingressVIP: 192.0.2.23
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
    # ...
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24
Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 18.3.11.9. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes.
Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 18.3.11.10. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: false 1
      poolMinPorts: 1 2
      poolBatchPorts: 3 3
      poolMaxPorts: 5 4
      openstackServiceNetwork: 172.30.0.0/15 5
1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 .
4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 18.3.12. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
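For example, on a host where FIPS compliance matters, you might generate an ECDSA key instead; this command is an illustrative sketch, and the key size is an assumption rather than a value from this document: USD ssh-keygen -t ecdsa -b 521 -N '' -f <path>/<file_name>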
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 18.3.13. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 18.3.13.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:
<api_floating_ip> api.<cluster_name>.<base_domain>
<application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
<application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
<application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>
The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc CLI tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>.
This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 18.3.13.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 18.3.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 18.3.15. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 18.3.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 18.3.17. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 18.3.18. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 18.4. Installing a cluster on OpenStack that supports SR-IOV-connected compute machines In OpenShift Container Platform version 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that can use compute machines with single-root I/O virtualization (SR-IOV) technology. 18.4.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Verify that OpenShift Container Platform 4.10 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . Verify that your network configuration does not rely on a provider network. Provider networks are not supported. Have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . Have the metadata service enabled in RHOSP. 18.4.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 18.13. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
Floating IP addresses: 3
Ports: 15
Routers: 1
Subnets: 1
RAM: 88 GB
vCPUs: 22
Volume storage: 275 GB
Instances: 7
Security groups: 3
Security group rules: 60
Server groups: 2, plus 1 for each additional availability zone in each machine pool
A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 18.4.2.1.
Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.4.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. Additionally, for clusters that use single-root input/output virtualization (SR-IOV), RHOSP compute nodes require a flavor that supports huge pages . Important SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. Additional resources For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 18.4.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.4.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.4.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. 
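If you are unsure whether Swift is available in your deployment, one hedged way to check, assuming your RHOSP credentials are loaded, is to list the service catalog and look for an object-store entry: USD openstack catalog list If no object-store service is listed, the installation program falls back to Cinder as described above.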
Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 18.4.5. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries. Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output
+--------------------------------------+----------------+-------------+
| ID                                   | Name           | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
+--------------------------------------+----------------+-------------+
A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 18.4.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: <username>
      password: <password>
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: <username>
      password: <password>
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config
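The OS_CLIENT_CONFIG_FILE variable, which is the first location in the search order that is described next, lets you keep clouds.yaml outside of the standard locations. As a minimal, hedged sketch (the path is illustrative, not a value from this document): USD export OS_CLIENT_CONFIG_FILE=/home/<user>/openstack/clouds.yaml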
Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 18.4.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.4.8. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines.
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 18.4.8.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer.
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.4.9. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 18.4.9.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 18.14. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 18.4.9.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 18.15. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. 
OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.4.9.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.16. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . 
Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. 
String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. 18.4.9.4. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 18.4.9.5. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible through the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process.
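Before you edit the inventory, you can confirm that the bare metal flavor exists; this check is an illustrative aside, and the flavor name matches the example file that follows: USD openstack flavor show my-bare-metal-flavor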
Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file
all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: "{{ansible_playbook_python}}"
      # User-provided values
      os_subnet_range: '10.0.0.0/16'
      os_flavor_master: 'my-bare-metal-flavor' 1
      os_flavor_worker: 'my-bare-metal-flavor' 2
      os_image_rhcos: 'rhcos'
      os_external_network: 'external'
      ...
1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 18.4.9.6. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
18.4.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required.
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs .
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent :
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 .
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
18.4.11. Enabling access to the environment
At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.
18.4.11.1. Enabling access with floating IP addresses
Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications.
Procedure
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
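Tip Before you continue, you can confirm that the new records resolve to the floating IP addresses. A minimal check, assuming the dig utility is installed and mycluster.example.com stands in for <cluster_name>.<base_domain>; because the apps record is a wildcard, any host name under .apps resolves:
$ dig +short api.mycluster.example.com
$ dig +short console-openshift-console.apps.mycluster.example.com
The first query should return <API_FIP> and the second should return <apps_FIP>.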
Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:
<api_floating_ip> api.<cluster_name>.<base_domain>
<application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
<application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
<application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>
The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.
Add the FIPs to the install-config.yaml file as the values of the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file.
Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.
18.4.11.2. Completing installation without floating IP addresses
You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.
In the install-config.yaml file, do not define the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.
Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.
18.4.12. Creating SR-IOV networks for compute machines
If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on.
Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required.
Prerequisites
Your cluster supports SR-IOV.
Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation.
You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks.
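Note Creating networks with the --provider-physical-network , --provider-network-type , and --external options typically requires administrative privileges in RHOSP; whether it does depends on your Neutron policy configuration. If the commands in the following procedure are rejected by policy, re-run them as a user with the admin role. After you finish the procedure, you can confirm that each network and subnet has the attributes that you specified, for example:
$ openstack network show radio
$ openstack subnet show radio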
Procedure On a command line, create a radio RHOSP network: USD openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: USD openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: USD openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: USD openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 18.4.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 18.4.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. 
Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 18.4.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin The cluster is operational. Before you can add SR-IOV compute machines though, you must perform additional tasks. 18.4.16. Preparing a cluster that runs on RHOSP for SR-IOV Before you use single root I/O virtualization (SR-IOV) on a cluster that runs on Red Hat OpenStack Platform (RHOSP), make the RHOSP metadata service mountable as a drive and enable the No-IOMMU Operator for the virtual function I/O (VFIO) driver. 18.4.16.1. Enabling the RHOSP metadata service as a mountable drive You can apply a machine config to your machine pool that makes the Red Hat OpenStack Platform (RHOSP) metadata service available as a mountable drive. The following machine config enables the display of RHOSP network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of SR-IOV resources to cluster SR-IOV resources. Procedure Create a machine config file from the following template: A mountable metadata service machine config file kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target 1 You can substitute a name of your choice. From a command line, apply the machine config: USD oc apply -f <machine_config_file_name>.yaml 18.4.16.2. Enabling the No-IOMMU feature for the RHOSP VFIO driver You can apply a machine config to your machine pool that enables the No-IOMMU feature for the Red Hat OpenStack Platform (RHOSP) virtual function I/O (VFIO) driver. The RHOSP vfio-pci driver requires this feature. 
Procedure
Create a machine config file from the following template:
A No-IOMMU VFIO machine config file
kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK
1 You can substitute a name of your choice.
From a command line, apply the machine config:
$ oc apply -f <machine_config_file_name>.yaml
The cluster is installed and prepared for SR-IOV configuration. Complete the post-installation SR-IOV tasks that are listed in the "Next steps" section.
18.4.17. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager .
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
See About remote health monitoring for more information about the Telemetry service
18.4.18. Next steps
To complete SR-IOV configuration for your cluster:
Install the Performance Addon Operator .
Configure the Performance Addon Operator with huge pages support .
Install the SR-IOV Operator .
Configure your SR-IOV network device .
Add an SR-IOV compute machine set .
Customize your cluster .
If necessary, you can opt out of remote health reporting .
If you need to enable external access to node ports, configure ingress cluster traffic by using a node port .
If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
18.5. Installing a cluster on OpenStack that supports OVS-DPDK-connected compute machines
If your Red Hat OpenStack Platform (RHOSP) deployment has Open vSwitch with the Data Plane Development Kit (OVS-DPDK) enabled, you can install an OpenShift Container Platform cluster on it. Clusters that run on such RHOSP deployments use OVS-DPDK features by providing access to poll mode drivers .
18.5.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
Verify that OpenShift Container Platform 4.10 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix .
You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage .
Have the metadata service enabled in RHOSP.
Plan your RHOSP OVS-DPDK deployment by referring to Planning your OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide.
Configure your RHOSP OVS-DPDK deployment according to Configuring an OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide.
You must complete Creating a flavor and deploying an instance for OVS-DPDK before you install a cluster on RHOSP. 18.5.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 18.17. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 18.5.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.5.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. Additionally, for clusters that use single-root input/output virtualization (SR-IOV), RHOSP compute nodes require a flavor that supports huge pages . Important SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. Additional resources For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 18.5.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.5.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 18.5.5. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 18.5.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. 
You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 18.5.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.5.8. 
Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 18.5.8.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 
2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.5.9. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 18.5.9.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 18.18. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . 
For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 18.5.9.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 18.19. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.5.9.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.20. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. 
String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . 
Mint , Passthrough , Manual or an empty string ( "" ).
fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true
imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table.
imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String
imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings
publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 .
sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys.
18.5.9.4. Custom subnets in RHOSP deployments
Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID.
Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:
The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork .
The installation program user has permission to create ports on this network, including ports with fixed IP addresses.
Clusters that use custom subnets have the following limitations:
If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet.
To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 18.5.9.5. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 18.5.9.6. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. 
apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA...
18.5.10. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required.
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs .
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
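Note You can list the identities that the agent currently holds before and after the next step; a freshly started agent reports that it has no identities:
$ ssh-add -l
Example output
The agent has no identities.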
Add your SSH private key to the ssh-agent :
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 .
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
18.5.11. Enabling access to the environment
At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.
18.5.11.1. Enabling access with floating IP addresses
Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications.
Procedure
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:
<api_floating_ip> api.<cluster_name>.<base_domain>
<application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
<application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
<application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>
The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.
Add the FIPs to the install-config.yaml file as the values of the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file.
Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.
18.5.11.2. Completing installation without floating IP addresses
You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.
In the install-config.yaml file, do not define the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.
Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.
18.5.12. Creating SR-IOV networks for compute machines
If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on.
Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required.
Prerequisites
Your cluster supports SR-IOV.
Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation.
You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks.
Procedure
On a command line, create a radio RHOSP network:
$ openstack network create radio --provider-physical-network radio --provider-network-type flat --external
Create an uplink RHOSP network:
$ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external
Create a subnet for the radio network:
$ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio
Create a subnet for the uplink network:
$ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink
18.5.13. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2
1 For <installation_directory> , specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn , debug , or error instead of info .
Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
... INFO Install complete!
18.5.13. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
Note
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
18.5.14. Verifying cluster status
You can verify your OpenShift Container Platform cluster's status during or after installation.
Procedure
In the cluster environment, export the administrator's kubeconfig file:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
View the control plane and compute machines created after a deployment:
$ oc get nodes
View your cluster's version:
$ oc get clusterversion
View your Operators' status:
$ oc get clusteroperator
View all running pods in the cluster:
$ oc get pods -A
18.5.15. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
The cluster is operational. However, before you can add OVS-DPDK compute machines, you must perform additional tasks.
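If you prefer a single command that blocks until every cluster Operator reports Available, rather than inspecting the output of oc get clusteroperator by eye, you can use oc wait. A minimal sketch; the timeout value is an assumption you should adjust to your environment:
$ oc wait --for=condition=Available clusteroperator --all --timeout=30m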
18.5.16. Enabling the RHOSP metadata service as a mountable drive
You can apply a machine config to your machine pool that makes the Red Hat OpenStack Platform (RHOSP) metadata service available as a mountable drive.
The following machine config enables the display of RHOSP network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of SR-IOV resources to cluster SR-IOV resources.
Procedure
Create a machine config file from the following template:
A mountable metadata service machine config file
kind: MachineConfig
apiVersion: machineconfiguration.openshift.io/v1
metadata:
  name: 20-mount-config 1
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: create-mountpoint-var-config.service
          enabled: true
          contents: |
            [Unit]
            Description=Create mountpoint /var/config
            Before=kubelet.service

            [Service]
            ExecStart=/bin/mkdir -p /var/config

            [Install]
            WantedBy=var-config.mount
        - name: var-config.mount
          enabled: true
          contents: |
            [Unit]
            Before=local-fs.target

            [Mount]
            Where=/var/config
            What=/dev/disk/by-label/config-2

            [Install]
            WantedBy=local-fs.target
1 You can substitute a name of your choice.
From a command line, apply the machine config:
$ oc apply -f <machine_config_file_name>.yaml
18.5.17. Enabling the No-IOMMU feature for the RHOSP VFIO driver
You can apply a machine config to your machine pool that enables the No-IOMMU feature for the Red Hat OpenStack Platform (RHOSP) virtual function I/O (VFIO) driver. The RHOSP vfio-pci driver requires this feature.
Procedure
Create a machine config file from the following template:
A No-IOMMU VFIO machine config file
kind: MachineConfig
apiVersion: machineconfiguration.openshift.io/v1
metadata:
  name: 99-vfio-noiommu 1
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modprobe.d/vfio-noiommu.conf
          mode: 0644
          contents:
            source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK
1 You can substitute a name of your choice.
From a command line, apply the machine config:
$ oc apply -f <machine_config_file_name>.yaml
18.5.18. Binding the vfio-pci kernel driver to NICs
Compute machines that connect to a virtual function I/O (VFIO) network require the vfio-pci kernel driver to be bound to the ports that are attached to a configured network. Create a machine config for workers that attach to this VFIO network.
Procedure
From a command line, retrieve VFIO network UUIDs:
$ openstack network show <VFIO_network_name> -f value -c id
Create a machine config on your cluster from the following template:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-vhostuser-bind
spec:
  config:
    ignition:
      version: 2.2.0
    systemd:
      units:
        - name: vhostuser-bind.service
          enabled: true
          contents: |
            [Unit]
            Description=Vhostuser Interface vfio-pci Bind
            Wants=network-online.target
            After=network-online.target ignition-firstboot-complete.service

            [Service]
            Type=oneshot
            EnvironmentFile=/etc/vhostuser-bind.conf
            ExecStart=/usr/local/bin/vhostuser $ARG

            [Install]
            WantedBy=multi-user.target
    storage:
      files:
        - contents:
            inline: vfio-pci
          filesystem: root
          mode: 0644
          path: /etc/modules-load.d/vfio-pci.conf
        - contents:
            inline: |
              #!/bin/bash
              set -e
              if [[ "$#" -lt 1 ]]; then
                echo "Network ID not provided, nothing to do"
                exit
              fi
              source /etc/vhostuser-bind.conf
              NW_DATA="/var/config/openstack/latest/network_data.json"
              if [ ! -f ${NW_DATA} ]; then
                echo "Network data file not found, trying to download it from nova metadata"
                if ! curl http://169.254.169.254/openstack/latest/network_data.json > /tmp/network_data.json; then
                  echo "Failed to download network data file"
                  exit 1
                fi
                NW_DATA="/tmp/network_data.json"
              fi
              function parseNetwork() {
                local nwid=$1
                local pcis=()
                echo "Network ID is $nwid"
                links=$(jq '.networks[] | select(.network_id == "'$nwid'") | .link' $NW_DATA)
                if [ ${#links} -gt 0 ]; then
                  for link in $links; do
                    echo "Link Name: $link"
                    mac=$(jq -r '.links[] | select(.id == '$link') | .ethernet_mac_address' $NW_DATA)
                    if [ -n $mac ]; then
                      pci=$(bindDriver $mac)
                      pci_ret=$?
                      if [[ "$pci_ret" -eq 0 ]]; then
                        echo "$pci bind successful"
                      fi
                    fi
                  done
                fi
              }
              function bindDriver() {
                local mac=$1
                for file in /sys/class/net/*; do
                  dev_mac=$(cat $file/address)
                  if [[ "$mac" == "$dev_mac" ]]; then
                    name=${file##*\/}
                    bus_str=$(ethtool -i $name | grep bus)
                    dev_t=${bus_str#*:}
                    dev=${dev_t#[[:space:]]}
                    echo $dev
                    devlink="/sys/bus/pci/devices/$dev"
                    syspath=$(realpath "$devlink")
                    if [ ! -f "$syspath/driver/unbind" ]; then
                      echo "File $syspath/driver/unbind not found"
                      return 1
                    fi
                    if ! echo "$dev">"$syspath/driver/unbind"; then
                      return 1
                    fi
                    if [ ! -f "$syspath/driver_override" ]; then
                      echo "File $syspath/driver_override not found"
                      return 1
                    fi
                    if ! echo "vfio-pci">"$syspath/driver_override"; then
                      return 1
                    fi
                    if [ ! -f "/sys/bus/pci/drivers/vfio-pci/bind" ]; then
                      echo "File /sys/bus/pci/drivers/vfio-pci/bind not found"
                      return 1
                    fi
                    if ! echo "$dev">"/sys/bus/pci/drivers/vfio-pci/bind"; then
                      return 1
                    fi
                    return 0
                  fi
                done
                return 1
              }
              for nwid in "$@"; do
                parseNetwork $nwid
              done
          filesystem: root
          mode: 0744
          path: /usr/local/bin/vhostuser
        - contents:
            inline: |
              ARG="be22563c-041e-44a0-9cbd-aa391b439a39,ec200105-fb85-4181-a6af-35816da6baf7" 1
          filesystem: root
          mode: 0644
          path: /etc/vhostuser-bind.conf
1 Replace this value with a comma-separated list of VFIO network UUIDs.
On boot for machines that are part of this set, the MAC addresses of ports are translated into PCI bus IDs. The vfio-pci module is bound to any port that is associated with a network that is identified by the RHOSP network ID.
Verification
On a compute node, from a command line, retrieve the name of the node by entering:
$ oc get nodes
Create a shell to debug the node:
$ oc debug node/<node_name>
Change the root directory for the current running process:
$ chroot /host
Enter the following command to list the kernel drivers that are handling each device on your machine:
$ lspci -k
Example output
00:07.0 Ethernet controller: Red Hat, Inc. Virtio network device
        Subsystem: Red Hat, Inc. Device 0001
        Kernel driver in use: vfio-pci
In the output of the command, VFIO ethernet controllers use the vfio-pci kernel driver.
18.5.19. Exposing the host-device interface to the pod
You can use the Container Network Interface (CNI) plugin to expose an interface that is on the host to the pod. The plugin moves the interface from the namespace of the host network to the namespace of the pod. The pod then has direct control of the interface.
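The procedure that follows requires the PCI bus ID of the host device (the pciBusId field in the network attachment definition). If you need to look up that ID, one option is to query the interface with ethtool from a debug shell on the node, which is the same approach the binding script above uses. A sketch; the interface name ens3 is an assumption:
$ ethtool -i ens3 | awk '/bus-info/ {print $2}'
Example output
0000:00:04.0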
Procedure
Create an additional network attachment with the host-device CNI plugin by using the following object as an example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vhostuser1
  namespace: default
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "hostonly",
    "type": "host-device",
    "pciBusId": "0000:00:04.0",
    "ipam": {
    }
  }'
Verification
From a command line, run the following command to see if networks are created in the namespace:
$ oc -n <your_cnf_namespace> get net-attach-def
Additional resources
Creating an additional network attachment with the Cluster Network Operator
The cluster is installed and prepared for configuration. You must now perform the OVS-DPDK configuration tasks that are listed in the next steps.
18.5.20. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
See About remote health monitoring for more information about the Telemetry service
18.5.21. Additional resources
See Performance Addon Operator for low latency nodes for information about configuring your deployment for real-time running and low latency.
18.5.22. Next steps
To complete OVS-DPDK configuration for your cluster:
Install the Performance Addon Operator.
Configure the Performance Addon Operator with huge pages support.
Customize your cluster.
If necessary, you can opt out of remote health reporting.
If you need to enable external access to node ports, configure ingress cluster traffic by using a node port.
If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses.
18.6. Installing a cluster on OpenStack on your own infrastructure
In OpenShift Container Platform version 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure.
Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process.
18.6.1. Prerequisites
You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You verified that OpenShift Container Platform 4.10 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix.
You have an RHOSP account where you want to install OpenShift Container Platform.
You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices.
On the machine from which you run the installation program, you have:
A single directory in which you can keep the files you create during the installation process
Python 3
18.6.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.
Important
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
18.6.3. Resource guidelines for installing OpenShift Container Platform on RHOSP
To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:
Table 18.21. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
Resource                Value
Floating IP addresses   3
Ports                   15
Routers                 1
Subnets                 1
RAM                     88 GB
vCPUs                   22
Volume storage          275 GB
Instances               7
Security groups         3
Security group rules    60
Server groups           2 - plus 1 for each additional availability zone in each machine pool
A cluster might function with fewer than recommended resources, but its performance is not guaranteed.
Important
If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.
Note
By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.
An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.
18.6.3.1. Control plane machines
By default, the OpenShift Container Platform installation process creates three control plane machines.
Each machine requires:
An instance from the RHOSP quota
A port from the RHOSP quota
A flavor with at least 16 GB memory and 4 vCPUs
At least 100 GB storage space from the RHOSP quota
18.6.3.2. Compute machines
By default, the OpenShift Container Platform installation process creates three compute machines.
Each machine requires:
An instance from the RHOSP quota
A port from the RHOSP quota
A flavor with at least 8 GB memory and 2 vCPUs
At least 100 GB storage space from the RHOSP quota
Tip
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.
18.6.3.3. Bootstrap machine
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.
The bootstrap machine requires:
An instance from the RHOSP quota
A port from the RHOSP quota
A flavor with at least 16 GB memory and 4 vCPUs
At least 100 GB storage space from the RHOSP quota
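You can compare these requirements against your project's current limits from the RHOSP CLI before you start. A quick sketch; which fields matter most depends on your deployment, and the grep pattern is only a convenience:
$ openstack quota show | grep -E 'instances|cores|ram|ports|volumes|gigabytes'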
18.6.4. Downloading playbook dependencies
The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them.
Note
These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.
Prerequisites
Python 3 is installed on your machine.
Procedure
On a command line, add the repositories:
Register with Red Hat Subscription Manager:
$ sudo subscription-manager register # If not done already
Pull the latest subscription data:
$ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already
Disable the current repositories:
$ sudo subscription-manager repos --disable=* # If not done already
Add the required repositories:
$ sudo subscription-manager repos \
  --enable=rhel-8-for-x86_64-baseos-rpms \
  --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
  --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
  --enable=rhel-8-for-x86_64-appstream-rpms
Install the modules:
$ sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr
Ensure that the python command points to python3:
$ sudo alternatives --set python /usr/bin/python3
18.6.5. Downloading the installation playbooks
Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure.
Prerequisites
The curl command-line tool is available on your machine.
Procedure
To download the playbooks to your working directory, run the following script from a command line:
$ xargs -n 1 curl -O <<< '
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/bootstrap.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/common.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/compute-nodes.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/control-plane.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/inventory.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/network.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/security-groups.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-bootstrap.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-compute-nodes.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-control-plane.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-load-balancers.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-network.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-security-groups.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-containers.yaml'
The playbooks are downloaded to your machine.
Important
During the installation process, you can modify the playbooks to configure your deployment.
Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP.
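To confirm that the download succeeded, you can count the playbooks in your working directory. The expected total of 14 is derived from the URL list above and assumes the directory contains no other YAML files:
$ ls *.yaml | wc -l
Example output
14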
Important
You must match any edits you make in the bootstrap.yaml, compute-nodes.yaml, control-plane.yaml, network.yaml, and security-groups.yaml files to the corresponding playbooks that are prefixed with down-. For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail.
18.6.6. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
You have a computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
Important
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
18.6.7. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Important
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
Note
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one.
For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
18.6.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image
The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI.
Prerequisites
The RHOSP CLI is installed.
Procedure
Log in to the Red Hat Customer Portal's Product Downloads page.
Under Version, select the most recent release of OpenShift Container Platform 4.10 for Red Hat Enterprise Linux (RHEL) 8.
Important
The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available.
Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW).
Decompress the image.
Note
You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz. To find out if or how the file is compressed, in a command line, enter:
$ file <name_of_downloaded_file>
From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI:
$ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos
Important
Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.
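If your environment requires the .raw format, for example because Glance is backed by Ceph, you can convert the decompressed QCOW2 file before uploading it. A sketch that assumes qemu-img is installed and reuses the file naming from the command above:
$ qemu-img convert -f qcow2 -O raw rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos-${RHCOS_VERSION}-openstack.raw
$ openstack image create --container-format=bare --disk-format=raw --file rhcos-${RHCOS_VERSION}-openstack.raw rhcos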
Warning
If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP.
After you upload the image to RHOSP, it is usable in the installation process.
18.6.9. Verifying external network access
The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).
Prerequisites
Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries
Procedure
Using the RHOSP CLI, verify the name and ID of the 'External' network:
$ openstack network list --long -c ID -c Name -c "Router Type"
Example output
+--------------------------------------+----------------+-------------+
| ID                                   | Name           | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
+--------------------------------------+----------------+-------------+
A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.
Note
If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.
18.6.10. Enabling access to the environment
At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.
You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.
18.6.10.1. Enabling access with floating IP addresses
Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process.
Procedure
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:
$ openstack floating ip create --description "bootstrap machine" <external_network>
Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
Note
If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:
<api_floating_ip> api.<cluster_name>.<base_domain>
<application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
<application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
<application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>
The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.
Add the FIPs to the inventory.yaml file as the values of the following variables:
os_api_fip
os_bootstrap_fip
os_ingress_fip
If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file.
Tip
You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.
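For reference, the variables might look like the following fragment of the inventory.yaml file that you downloaded earlier. This is a minimal sketch; the IP addresses and network name are illustrative assumptions, and the exact layout of your copy of the file may differ:
os_external_network: 'external'
os_api_fip: '203.0.113.23'
os_bootstrap_fip: '203.0.113.20'
os_ingress_fip: '203.0.113.19'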
18.6.10.2. Completing installation without floating IP addresses
You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.
In the inventory.yaml file, do not define the following variables:
os_api_fip
os_bootstrap_fip
os_ingress_fip
If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own.
If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.
Note
You can enable name resolution by creating DNS records for the API and Ingress ports. For example:
api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.
18.6.11. Defining parameters for the installation program
The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.
Procedure
Create the clouds.yaml file:
If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.
Important
Remember to add a password to the auth field.
You can also keep secrets in a separate file from clouds.yaml.
If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: <username>
      password: <password>
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: <username>
      password: <password>
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:
Copy the certificate authority file to your machine.
Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
Tip
After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run:
$ oc edit configmap -n openshift-config cloud-provider-config
Place the clouds.yaml file in one of the following locations:
The value of the OS_CLIENT_CONFIG_FILE environment variable
The current directory
A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml
The installation program searches for clouds.yaml in that order.
18.6.12. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
1 For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select openstack as the platform to target.
Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
Specify the floating IP address to use for external access to the OpenShift API.
Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
Select the base domain to deploy the cluster to.
All DNS records will be sub-domains of this base and will also include the cluster name.
Enter a name for your cluster. The name must be 14 or fewer characters long.
Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
You now have the file install-config.yaml in the directory that you specified.
18.6.13. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
Note
After installation, you cannot modify these parameters in the install-config.yaml file.
18.6.13.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
Table 18.22. Required parameters
Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com. metadata Kubernetes resource ObjectMeta, from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 or fewer characters long. platform The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } }
18.6.13.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported.
Table 18.23. Network parameters
Parameter Description Values networking The configuration for the cluster network.
Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.6.13.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.24. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. 
Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External. The default value is External. Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035. sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: sshKey: <key1> <key2> <key3>
18.6.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters
Additional RHOSP configuration parameters are described in the following table:
Table 18.25. Additional RHOSP parameters
Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30. compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance. controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30. controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance. platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud. platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external. platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge.
18.6.13.5. Optional RHOSP configuration parameters
Optional RHOSP configuration parameters are described in the following table:
Table 18.26. Optional RHOSP parameters
Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf. compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7. compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.
On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. 
For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 18.6.13.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. 
If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 18.6.13.7. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

18.6.13.8. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run:

USD python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24 . To set the value manually, open the file and set the value of networking.machineCIDR to something that matches your intended Neutron subnet. 18.6.13.9. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run:

USD python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 .
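Both scripts in the preceding two sections follow the same read-modify-write pattern. As a minimal consolidated sketch, assuming PyYAML, a local install-config.yaml, and the example CIDR from above, you could apply both edits in one pass:

import yaml

path = "install-config.yaml"
data = yaml.safe_load(open(path))

# Match your intended Neutron subnet; 192.0.2.0/24 is an example value.
data["networking"]["machineNetwork"] = [{"cidr": "192.0.2.0/24"}]
# Compute machines are created manually later in this procedure.
data["compute"][0]["replicas"] = 0

open(path, "w").write(yaml.dump(data, default_flow_style=False))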
18.6.13.10. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network. OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 18.6.13.10.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command: USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command: USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.
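One of the requirements above is that the provider network can reach the metadata service. The following is a hypothetical check, not part of the documented procedure: it assumes the openstacksdk Python package, a clouds.yaml entry named mycloud, and an example subnet name, and reports whether a host route to 169.254.169.254 is present on the subnet.

import openstack

conn = openstack.connect(cloud="mycloud")  # clouds.yaml entry; example name
subnet = conn.network.find_subnet("provider-subnet", ignore_missing=False)

# host_routes is a list of {"destination": ..., "nexthop": ...} dictionaries.
routes = subnet.host_routes or []
ok = any(r.get("destination", "").startswith("169.254.169.254") for r in routes)
print(f"metadata route present on {subnet.name}: {ok}")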
18.6.13.10.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network

...
platform:
  openstack:
    apiVIP: 192.0.2.13
    ingressVIP: 192.0.2.23
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
    # ...
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24

Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks .
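As a sanity check against the Important note above, the following hedged sketch, which assumes PyYAML and a local install-config.yaml, verifies that both VIPs fall inside the networking.machineNetwork CIDR block:

import ipaddress

import yaml

data = yaml.safe_load(open("install-config.yaml"))
cidr = ipaddress.ip_network(data["networking"]["machineNetwork"][0]["cidr"])

for key in ("apiVIP", "ingressVIP"):
    vip = ipaddress.ip_address(data["platform"]["openstack"][key])
    if vip not in cidr:
        raise SystemExit(f"{key} {vip} is outside {cidr}")
print("Both VIPs are inside the machine network CIDR block.")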
18.6.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 18.6.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installation program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script.
The script modifies the bootstrap Ignition file to set the hostname and, if available, the CA certificate file when it runs:

import base64
import json
import os

with open('bootstrap.ign', 'r') as f:
    ignition = json.load(f)

files = ignition['storage'].get('files', [])

infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
files.append(
{
    'path': '/etc/hostname',
    'mode': 420,
    'contents': {
        'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
    }
})

ca_cert_path = os.environ.get('OS_CACERT', '')
if ca_cert_path:
    with open(ca_cert_path, 'r') as f:
        ca_cert = f.read().encode()
    ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
    files.append(
    {
        'path': '/opt/openshift/tls/cloud-ca-cert.pem',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
        }
    })

ignition['storage']['files'] = files

with open('bootstrap.ign', 'w') as f:
    json.dump(ignition, f)

Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values:

{
  "ignition": {
    "config": {
      "merge": [{
        "source": "<storage_url>", 1
        "httpHeaders": [{
          "name": "X-Auth-Token", 2
          "value": "<token_ID>" 3
        }]
      }]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [{
          "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
        }]
      }
    },
    "version": "3.2.0"
  }
}

1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process.
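The secondary file can also be generated from a script rather than typed by hand. The following is an illustrative sketch only; it reuses the placeholders from the template above, which you must still replace with your own storage URL, token ID, and certificate:

import json
import os

infra_id = os.environ.get("INFRA_ID", "openshift")

shim = {
    "ignition": {
        "config": {
            "merge": [{
                "source": "<storage_url>",  # the Glance storage location
                "httpHeaders": [
                    {"name": "X-Auth-Token", "value": "<token_ID>"}
                ],
            }]
        },
        "security": {
            "tls": {
                "certificateAuthorities": [{
                    # Only needed if the server uses a self-signed certificate.
                    "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>"
                }]
            }
        },
        "version": "3.2.0",
    }
}

with open(f"{infra_id}-bootstrap-ignition.json", "w") as f:
    json.dump(shim, f, indent=2)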
18.6.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script:

USD for index in USD(seq 0 2); do
    MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n"
    python -c "import base64, json, sys;
ignition = json.load(sys.stdin);
storage = ignition.get('storage', {});
files = storage.get('files', []);
files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
storage['files'] = files;
ignition['storage'] = storage;
json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json"
done

You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 18.6.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook

...
# The public network providing connectivity to the cluster. If not
# provided, the cluster external connectivity must be provided in another
# way.
# Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.
os_external_network: 'external'
...

Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook

...
# OpenShift API floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the Control Plane to
# serve the OpenShift API.
os_api_fip: '203.0.113.23'

# OpenShift Ingress floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the worker nodes to serve
# the applications.
os_ingress_fip: '203.0.113.19'

# If this value is non-empty, the corresponding floating IP will be
# attached to the bootstrap machine. This is needed for collecting logs
# in case of install failure.
os_bootstrap_fip: '203.0.113.20'

Important If you do not define values for os_api_fip and os_ingress_fip , you must perform post-installation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information.
On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 18.6.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible through the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file

all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: "{{ansible_playbook_python}}"

      # User-provided values
      os_subnet_range: '10.0.0.0/16'
      os_flavor_master: 'my-bare-metal-flavor' 1
      os_flavor_worker: 'my-bare-metal-flavor' 2
      os_image_rhcos: 'rhcos'
      os_external_network: 'external'
...

1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer might time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug
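The flavor edit described in the procedure above can also be scripted. The following sketch assumes PyYAML, an inventory.yaml laid out like the example, and a hypothetical flavor name; note that a plain YAML round trip like this one drops the comments from the file.

import yaml

path = "inventory.yaml"
inventory = yaml.safe_load(open(path))

# The user-provided values live under all.hosts.localhost, as in the example.
host_vars = inventory["all"]["hosts"]["localhost"]
host_vars["os_flavor_master"] = "my-bare-metal-flavor"  # hypothetical flavor name
host_vars["os_flavor_worker"] = "my-bare-metal-flavor"

open(path, "w").write(yaml.dump(inventory, default_flow_style=False))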
18.6.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 18.6.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files are not already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.23.0 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 18.6.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin 18.6.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted.
Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 18.6.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml Next steps Approve the certificate signing requests for the machines. 18.6.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines:

USD oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.23.0
master-1   Ready    master   63m   v1.23.0
master-2   Ready    master   64m   v1.23.0

The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

USD oc get csr

Example output

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place.
The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

USD oc get csr

Example output

NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending
...

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

USD oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.23.0
master-1   Ready    master   73m   v1.23.0
master-2   Ready    master   74m   v1.23.0
worker-0   Ready    worker   11m   v1.23.0
worker-1   Ready    worker   11m   v1.23.0

Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 18.6.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ). Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 18.6.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 18.6.26. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port .
If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 18.7. Installing a cluster on OpenStack with Kuryr on your own infrastructure In OpenShift Container Platform version 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 18.7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.10 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 18.7.2. About Kuryr SDN Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. 
Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where both of the following criteria are true: The RHOSP version is less than 16. The deployment uses UDP services, or a large number of TCP services on few hypervisors. Kuryr is also not recommended where both of the following criteria are true: The ovn-octavia Octavia driver is disabled. The deployment uses a large number of TCP services on few hypervisors. 18.7.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 18.27. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr

Resource: Value
Floating IP addresses: 3 (plus the expected number of Services of LoadBalancer type)
Ports: 1500 (1 needed per pod)
Routers: 1
Subnets: 250 (1 needed per namespace/project)
Networks: 250 (1 needed per namespace/project)
RAM: 112 GB
vCPUs: 28
Volume storage: 275 GB
Instances: 7
Security groups: 250 (1 needed per Service and per NetworkPolicy)
Security group rules: 1000
Server groups: 2 (plus 1 for each additional availability zone in each machine pool)
Load balancers: 100 (1 needed per Service)
Load balancer listeners: 500 (1 needed per Service-exposed port)
Load balancer pools: 500 (1 needed per Service-exposed port)

A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. A rough quota estimate that follows the rules in these notes is sketched after this section.
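The per-unit rules in the notes above can be turned into a rough, illustrative quota estimate. The numbers below are example inputs, not a sizing recommendation, and the function ignores fixed costs such as routers and the bootstrap machine:

def kuryr_quota_estimate(pods, services, namespaces, network_policies):
    """Apply the per-unit rules from Table 18.27; fixed costs are ignored."""
    return {
        "ports": pods,                                    # 1 port per pod
        "subnets": namespaces,                            # 1 per namespace/project
        "networks": namespaces,                           # 1 per namespace/project
        "security_groups": services + network_policies,  # 1 per Service and per NetworkPolicy
        "load_balancers": services,                       # 1 per Service
    }

# Example inputs only; size these to your own workload.
print(kuryr_quota_estimate(pods=600, services=40, namespaces=25, network_policies=10))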
To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use the Neutron trunk ports extension. Use the openvswitch firewall driver if the ML2/OVS Neutron driver is used instead of ovs-hybrid . 18.7.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 18.7.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to work properly. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 18.7.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud and an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example:

(undercloud) USD openstack overcloud container image prepare \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
--namespace=registry.access.redhat.com/rhosp13 \
--push-destination=<local-ip-from-undercloud.conf>:8787 \
--prefix=openstack- \
--tag-from-label {version}-{product-version} \
--output-env-file=/home/stack/templates/overcloud_images.yaml \
--output-images-file /home/stack/local_registry_images.yaml

Verify that the local_registry_images.yaml file contains the Octavia images. For example:

...
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
  push_destination: <local-ip-from-undercloud.conf>:8787

Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node:

(undercloud) USD sudo openstack overcloud container image upload \
--config-file /home/stack/local_registry_images.yaml \
--verbose

This might take some time depending on the speed of your network and Undercloud disk.
Since an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase its listeners' default timeouts for connections. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command:

(undercloud) USD cat octavia_timeouts.yaml
parameter_defaults:
  OctaviaTimeoutClientData: 1200000
  OctaviaTimeoutMemberData: 1200000

Note This is not needed for RHOSP 13.0.13+. Install or update your Overcloud environment with Octavia:

USD openstack overcloud deploy --templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
-e octavia_timeouts.yaml

Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project. To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project. This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation. Note This task is unnecessary in RHOSP version 13.0.13 or later. Octavia implements a new ACL API that restricts access to the load balancers VIP. Get the project ID:

USD openstack project show <project>

Example output

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | PROJECT_ID                       |
| is_domain   | False                            |
| name        | *<project>*                      |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

Add the project ID to octavia.conf for the controllers. Source the stackrc file:

USD source stackrc # Undercloud credentials

List the Overcloud controllers:

USD openstack server list

Example output

+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| ID                                   | Name         | Status | Networks              | Image          | Flavor     |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller |
| dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+

SSH into the controller(s):

USD ssh heat-admin@192.168.24.8

Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user's account. Restart the Octavia worker so the new configuration loads:

controller-0USD sudo docker restart octavia_worker

Note Depending on your RHOSP environment, Octavia might not support UDP listeners.
If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP version 16 or later supports UDP. 18.7.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter:

USD openstack loadbalancer provider list

Example output

+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. 18.7.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported. Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported. Kuryr SDN does not support automatic unidling by a service.
RHOSP environment limitations There are limitations when using Kuryr SDN that depend on your deployment environment. Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution. In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf , which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails. To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1 , that is, CGO_ENABLED=1 , or ensure that the variable is absent. In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails. Note musl-based containers, including Alpine-based containers, do not support the use-vc option. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. Important If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service default supports UDP load balancing. The recreation causes approximately one minute of downtime for the DNS service. 18.7.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.7.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 18.7.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management.
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.7.5. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 18.7.6. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 18.7.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command:

$ tar -xvf openshift-install-linux.tar.gz

Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

18.7.8. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important
Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Procedure

If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.
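As a final sanity check, you can confirm the key type against the FIPS guidance above before you hand the key to the installation program. This is a minimal, hypothetical sketch; the key path is an assumption, so substitute your own:

import pathlib

# Path to the public key you plan to pass to the installer (assumption).
pub = pathlib.Path.home() / ".ssh" / "id_ed25519.pub"

key_type = pub.read_text().split()[0]  # e.g. "ssh-ed25519" or "ssh-rsa"
print("key type:", key_type)

# Per the note above, FIPS-mode clusters require RSA or ECDSA keys.
if key_type == "ssh-ed25519":
    print("warning: ed25519 keys are not FIPS-compliant; use rsa or ecdsa instead")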
18.7.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image

The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI.

Prerequisites

The RHOSP CLI is installed.

Procedure

Log in to the Red Hat Customer Portal's Product Downloads page.

Under Version, select the most recent release of OpenShift Container Platform 4.10 for Red Hat Enterprise Linux (RHEL) 8.

Important
The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available.

Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW).

Decompress the image.

Note
You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz. To find out if or how the file is compressed, in a command line, enter:

$ file <name_of_downloaded_file>

From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI:

$ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos

Important
Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 format. If you use Ceph, you must use the .raw format.

Warning
If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP.

After you upload the image to RHOSP, it is usable in the installation process.

18.7.10. Verifying external network access

The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Prerequisites

Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries.

Procedure

Using the RHOSP CLI, verify the name and ID of the 'External' network:

$ openstack network list --long -c ID -c Name -c "Router Type"

Example output

+--------------------------------------+----------------+-------------+
| ID                                   | Name           | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
+--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.

Note
If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.
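If you prefer to script this check, the python3-openstacksdk module that you installed earlier can list external networks directly. The following is a minimal sketch, assuming a cloud entry named shiftstack in your clouds.yaml:

import openstack

# Connect by using a named cloud from clouds.yaml (the name is an assumption).
conn = openstack.connect(cloud="shiftstack")

# Networks with the "router:external" attribute set are external networks.
externals = list(conn.network.networks(is_router_external=True))
for net in externals:
    print(net.id, net.name)

if not externals:
    print("no external network found; see the guidance above")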
18.7.11. Enabling access to the environment

At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.

You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

18.7.11.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process.

Procedure

Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>

Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>

Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:

$ openstack floating ip create --description "bootstrap machine" <external_network>

Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

api.<cluster_name>.<base_domain>.    IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>

Note
If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

<api_floating_ip> api.<cluster_name>.<base_domain>
<application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
<application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
<application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc commands. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.

Add the FIPs to the inventory.yaml file as the values of the following variables:

os_api_fip
os_bootstrap_fip
os_ingress_fip

If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file.

Tip
You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.
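To avoid typing these records by hand, you can generate both the DNS records and the /etc/hosts lines from the cluster name, base domain, and the two FIPs. A minimal sketch; the example values are placeholders:

# Placeholders; substitute the values for your cluster.
cluster_name = "example"
base_domain = "example.com"
api_fip = "203.0.113.23"
apps_fip = "203.0.113.19"

apps = [
    "grafana-openshift-monitoring", "prometheus-k8s-openshift-monitoring",
    "oauth-openshift", "console-openshift-console",
    "integrated-oauth-server-openshift-authentication",
]

# DNS records, following the patterns described above.
print(f"api.{cluster_name}.{base_domain}. IN A {api_fip}")
print(f"*.apps.{cluster_name}.{base_domain}. IN A {apps_fip}")

# Equivalent /etc/hosts lines for when you do not control the DNS server.
print(f"{api_fip} api.{cluster_name}.{base_domain}")
for app in apps:
    print(f"{apps_fip} {app}.apps.{cluster_name}.{base_domain}")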
18.7.11.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.

In the inventory.yaml file, do not define the following variables:

os_api_fip
os_bootstrap_fip
os_ingress_fip

If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own.

If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

Note
You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.    IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.

18.7.12. Defining parameters for the installation program

The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.

Procedure

Create the clouds.yaml file:

If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

Important
Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: <username>
      password: <password>
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: <username>
      password: <password>
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'

If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

Copy the certificate authority file to your machine.

Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"

Tip
After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run:

$ oc edit configmap -n openshift-config cloud-provider-config

Place the clouds.yaml file in one of the following locations:

The value of the OS_CLIENT_CONFIG_FILE environment variable
The current directory
A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

The installation program searches for clouds.yaml in that order.
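If you script your environment setup, you can also emit clouds.yaml with the yaml module rather than writing it by hand. A minimal sketch with placeholder credentials, mirroring the shiftstack entry above:

import yaml

# Placeholder values; substitute your own RHOSP credentials.
clouds = {
    "clouds": {
        "shiftstack": {
            "auth": {
                "auth_url": "http://10.10.14.42:5000/v3",
                "project_name": "shiftstack",
                "username": "<username>",
                "password": "<password>",
                "user_domain_name": "Default",
                "project_domain_name": "Default",
            }
        }
    }
}

with open("clouds.yaml", "w") as f:
    yaml.safe_dump(clouds, f, default_flow_style=False)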
18.7.13. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Obtain service principal permissions at the subscription level.

Procedure

Create the install-config.yaml file.

Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

Important
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

At the prompts, provide the configuration details for your cloud:

Optional: Select an SSH key to use to access your cluster machines.

Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

Select openstack as the platform to target.

Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.

Specify the floating IP address to use for external access to the OpenShift API.

Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.

Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.

Enter a name for your cluster. The name must be 14 or fewer characters long.

Paste the pull secret from the Red Hat OpenShift Cluster Manager.

Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

You now have the file install-config.yaml in the directory that you specified.

18.7.14. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note
After installation, you cannot modify these parameters in the install-config.yaml file.

18.7.14.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 18.28. Required parameters

apiVersion
The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
Values: String

baseDomain
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
Values: Object

metadata.name
The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
Values: String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.
platform
The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.
Values: Object

pullSecret
Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
Values: For example:

{
  "auths":{
    "cloud.openshift.com":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    },
    "quay.io":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    }
  }
}

18.7.14.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported.

Table 18.29. Network parameters

networking
The configuration for the cluster network.
Values: Object

Note
You cannot modify parameters specified by the networking object after installation.

networking.networkType
The cluster network provider Container Network Interface (CNI) plugin to install.
Values: Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork
The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap.
Values: An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr
Required if you use networking.clusterNetwork. An IP address block. An IPv4 network.
Values: An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix
The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.
Values: A subnet prefix. The default value is 23.

networking.serviceNetwork
The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.
Values: An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
  - 172.30.0.0/16

networking.machineNetwork
The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap.
Values: An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr
Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.
Values: An IP network block in CIDR notation. For example, 10.0.0.0/16.

Note
Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
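The hostPrefix arithmetic above is easy to check programmatically. A minimal sketch that reproduces the 510-address figure for a /23 node subnet:

# Usable pod IP addresses for a given hostPrefix, per the formula above:
# 2^(32 - hostPrefix) - 2 (the network and broadcast addresses are excluded).
def pod_ips(host_prefix: int) -> int:
    return 2 ** (32 - host_prefix) - 2

print(pod_ips(23))  # 510, matching the default /23 host prefix
print(pod_ips(24))  # 254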
18.7.14.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 18.30. Optional parameters

additionalTrustBundle
A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
Values: String

cgroupsV2
Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.
Values: true

compute
The configuration for the machines that comprise the compute nodes.
Values: Array of MachinePool objects.

compute.architecture
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).
Values: String

compute.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
Important
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Values: Enabled or Disabled

compute.name
Required if you use compute. The name of the machine pool.
Values: worker

compute.platform
Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
Values: alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas
The number of compute machines, which are also known as worker machines, to provision.
Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
The configuration for the machines that comprise the control plane.
Values: Array of MachinePool objects.

controlPlane.architecture
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).
Values: String

controlPlane.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
Important
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Values: Enabled or Disabled

controlPlane.name
Required if you use controlPlane. The name of the machine pool.
Values: master

controlPlane.platform
Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
Values: alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas
The number of control plane machines to provision.
Values: The only supported value is 3, which is the default value.

credentialsMode
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
Note
Not all CCO modes are supported for all cloud providers.
For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note
If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Values: Mint, Passthrough, Manual or an empty string ("").

fips
Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important
To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
Note
If you are using Azure File storage, you cannot enable FIPS mode.
Values: false or true

imageContentSources
Sources and repositories for the release-image content.
Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
Values: String

imageContentSources.mirrors
Specify one or more repositories that may also contain the same images.
Values: Array of strings

publish
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.
Values: Internal or External. The default value is External. Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.
Important
If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey
The SSH key or keys to authenticate access to your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Values: One or more keys.

18.7.14.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters

Additional RHOSP configuration parameters are described in the following table:

Table 18.31. Additional RHOSP parameters

compute.platform.openstack.rootVolume.size
For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
Values: Integer, for example 30.

compute.platform.openstack.rootVolume.type
For compute machines, the root volume's type.
Values: String, for example performance.

controlPlane.platform.openstack.rootVolume.size
For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
Values: Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type
For control plane machines, the root volume's type.
Values: String, for example performance.

platform.openstack.cloud
The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.
Values: String, for example MyCloud.

platform.openstack.externalNetwork
The RHOSP external network name to be used for installation.
Values: String, for example external.

platform.openstack.computeFlavor
The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually.
Values: String, for example m1.xlarge.
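Before you continue, it can be useful to confirm that the RHOSP-specific values are actually present in install-config.yaml. The following is a minimal sketch in the same python -c style that later subsections use; the keys it inspects are taken from the table above:

import yaml

data = yaml.safe_load(open("install-config.yaml"))
openstack = data.get("platform", {}).get("openstack", {})

# Report a few of the RHOSP parameters described above.
for key in ("cloud", "externalNetwork"):
    print(key, "=", openstack.get(key, "<not set>"))

if "computeFlavor" in openstack:
    print("note: computeFlavor is deprecated; prefer the type key in "
          "platform.openstack.defaultMachinePlatform")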
18.7.14.5. Optional RHOSP configuration parameters

Optional RHOSP configuration parameters are described in the following table:

Table 18.32. Optional RHOSP parameters

compute.platform.openstack.additionalNetworkIDs
Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.
Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs
Additional security groups that are associated with compute machines.
Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones
RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
Values: A list of strings. For example, ["zone-1", "zone-2"].

compute.platform.openstack.rootVolume.zones
For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone.
Values: A list of strings, for example ["zone-1", "zone-2"].

compute.platform.openstack.serverGroupPolicy
Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity. An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.
Values: A server group policy to apply to the machine pool. For example, soft-affinity.

controlPlane.platform.openstack.additionalNetworkIDs
Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.
Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs
Additional security groups that are associated with control plane machines.
Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones
RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
Values: A list of strings. For example, ["zone-1", "zone-2"].
For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 18.7.14.6. 
18.7.14.6. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.

This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:

The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.
The installation program user has permission to create ports on this network, including ports with fixed IP addresses.

Clusters that use custom subnets have the following limitations:

If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network.

Note
By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.

18.7.14.7. Sample customized install-config.yaml file for RHOSP with Kuryr

To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

Important
This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16 1
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
    trunkSupport: true 2
    octaviaSupport: true 3
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts.
2 3 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services.
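Callout 1 implies a simple rule: a range twice the size of serviceNetwork corresponds to decrementing the CIDR prefix by one. A minimal sketch of that calculation with the standard library, using the serviceNetwork value from the sample file:

import ipaddress

# serviceNetwork from the sample file above.
service_net = ipaddress.ip_network("172.30.0.0/16")

# Twice the size means one bit less of prefix, for example /16 -> /15.
doubled = service_net.supernet(prefixlen_diff=1)
print("serviceNetwork:", service_net)          # 172.30.0.0/16
print("doubled range:", doubled)               # 172.30.0.0/15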
18.7.14.8. Cluster deployment on RHOSP provider networks

You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.

RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them.

In this topology, OpenShift Container Platform workloads are connected to a data center by using a provider network.

OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation.

Example provider network types include flat (untagged) and VLAN (802.1Q tagged).

Note
A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections.

You can learn more about provider and tenant networks in the RHOSP documentation.

18.7.14.8.1. RHOSP provider network requirements for cluster installation

Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions:

The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API.
The RHOSP networking service has the port security and allowed address pairs extensions enabled.
The provider network can be shared with other tenants.

Tip
Use the openstack network create command with the --share flag to create a network that can be shared.

The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet.

Tip
To create a network for a project that is named "openshift," enter the following command:

$ openstack network create --project openshift

To create a subnet for a project that is named "openshift," enter the following command:

$ openstack subnet create --project openshift

To learn more about creating networks on RHOSP, read the provider networks documentation.

If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network.

Important
Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network.

Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default.

Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:

$ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...

Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.
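You can also script the ownership check from the Important box above with openstacksdk. A minimal sketch; the cloud entry and network name are assumptions, so substitute your own:

import openstack

# Cloud entry and provider network name are placeholders.
conn = openstack.connect(cloud="shiftstack")
network = conn.network.find_network("provider-net")

# The provider network must be owned by the project that creates the cluster.
project_id = conn.current_project_id
if network is None:
    print("provider network not found")
elif network.project_id != project_id:
    print("warning: network is owned by", network.project_id,
          "but you are using project", project_id)
else:
    print("ok: provider network is owned by your project")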
18.7.14.8.2. Deploying a cluster that has a primary interface on a provider network

You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network.

Prerequisites

Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation".

Procedure

In a text editor, open the install-config.yaml file.

Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP.
Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP.
Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet.
Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet.

Important
The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block.

Section of an installation configuration file for a cluster that relies on a RHOSP provider network

...
platform:
  openstack:
    apiVIP: 192.0.2.13
    ingressVIP: 192.0.2.23
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
    # ...
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24

Warning
You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface.

When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network.

Tip
You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks.
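The four edits in this procedure can also be applied with a script, in the same python -c style that later subsections use. This is a minimal sketch; the VIP addresses, subnet UUID, and CIDR are the placeholders from the sample section above:

$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["platform"]["openstack"]["apiVIP"] = "192.0.2.13";
data["platform"]["openstack"]["ingressVIP"] = "192.0.2.23";
data["platform"]["openstack"]["machinesSubnet"] = "fa806b2f-ac49-4bce-b9db-124bc64209bf";
data["networking"]["machineNetwork"] = [{"cidr": "192.0.2.0/24"}];
open(path, "w").write(yaml.dump(data, default_flow_style=False))'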
18.7.14.9. Kuryr ports pools

A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.

The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes.

Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair.

Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior:

The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false.
The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1.
The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted.
The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3.

18.7.14.10. Adjusting Kuryr ports pools during installation

During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation.

Prerequisites

Create and modify the install-config.yaml file.

Procedure

From a command line, create the manifest files:

$ ./openshift-install create manifests --dir <installation_directory> 1

1 For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

1 For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests/ directory, as shown:

$ ls <installation_directory>/manifests/cluster-network-*

Example output

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml

Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want. Edit the settings to meet your requirements. The following file is provided as an example:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: false 1
      poolMinPorts: 1 2
      poolBatchPorts: 3 3
      poolMaxPorts: 5 4
      openstackServiceNetwork: 172.30.0.0/15 5

1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false.
2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts. The default value is 1.
3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts. The default value is 3.
4 If the number of free ports in a pool is higher than the value of poolMaxPorts, Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0.
5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork, and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork. The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1.

Save the cluster-network-03-config.yml file, and exit the text editor.

Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster.
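If you generate the manifest rather than hand-editing it, the same CR can be written with a short script. A minimal sketch, assuming the example values above and a hypothetical installation directory of ./ocp:

import yaml

# Values mirror the example CR above; adjust to your requirements.
cr = {
    "apiVersion": "operator.openshift.io/v1",
    "kind": "Network",
    "metadata": {"name": "cluster"},
    "spec": {
        "clusterNetwork": [{"cidr": "10.128.0.0/14", "hostPrefix": 23}],
        "serviceNetwork": ["172.30.0.0/16"],
        "defaultNetwork": {
            "type": "Kuryr",
            "kuryrConfig": {
                "enablePortPoolsPrepopulation": False,
                "poolMinPorts": 1,
                "poolBatchPorts": 3,
                "poolMaxPorts": 5,
                "openstackServiceNetwork": "172.30.0.0/15",
            },
        },
    },
}

# "ocp" is an assumed installation directory name.
with open("ocp/manifests/cluster-network-03-config.yml", "w") as f:
    yaml.safe_dump(cr, f, default_flow_style=False)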
18.7.14.11. Setting a custom subnet for machines

The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file.

Prerequisites

You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program.

Procedure

On a command line, browse to the directory that contains install-config.yaml.

From that directory, either run a script to edit the install-config.yaml file or update the file manually:

To set the value by using a script, run:

$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24.

To set the value manually, open the file and set the value of networking.machineNetwork.cidr to something that matches your intended Neutron subnet.

18.7.14.12. Emptying compute machine pools

To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually.

Prerequisites

You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program.

Procedure

On a command line, browse to the directory that contains install-config.yaml.

From that directory, either run a script to edit the install-config.yaml file or update the file manually:

To set the value by using a script, run:

$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0.

18.7.14.13. Modifying the network type

By default, the installation program selects the OpenShiftSDN network type. To use Kuryr instead, change the value in the installation configuration file that the program generated.

Prerequisites

You have the file install-config.yaml that was generated by the OpenShift Container Platform installation program.

Procedure

In a command prompt, browse to the directory that contains install-config.yaml.

From that directory, either run a script to edit the install-config.yaml file or update the file manually:

To set the value by using a script, run:

$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["networkType"] = "Kuryr";
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

To set the value manually, open the file and set networking.networkType to "Kuryr".
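The three edits above can also be applied in one pass. A minimal consolidated sketch, using the same placeholder CIDR:

$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}];
data["compute"][0]["replicas"] = 0;
data["networking"]["networkType"] = "Kuryr";
open(path, "w").write(yaml.dump(data, default_flow_style=False))'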
18.7.15. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.

Important
The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites

You obtained the OpenShift Container Platform installation program.
You created the install-config.yaml installation configuration file.

Procedure

Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:

$ ./openshift-install create manifests --dir <installation_directory> 1

1 For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

Remove the Kubernetes manifest files that define the control plane machines and compute machine sets:

$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

Because you create and manage these resources yourself, you do not have to initialize them.

You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment.

Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
Locate the mastersSchedulable parameter and ensure that it is set to false.
Save and exit the file.

To create the Ignition configuration files, run the following command from the directory that contains the installation program:

$ ./openshift-install create ignition-configs --dir <installation_directory> 1

1 For <installation_directory>, specify the same installation directory.

Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory.

Export the metadata file's infraID key as an environment variable:

$ export INFRA_ID=$(jq -r .infraID metadata.json)

Tip
Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project.
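If jq is not installed on your machine, the same value can be read with the Python interpreter you already have. A minimal sketch of an equivalent export:

$ export INFRA_ID=$(python -c 'import json; print(json.load(open("metadata.json"))["infraID"])')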
18.7.16. Preparing the bootstrap Ignition files

The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file.

Prerequisites

You have the bootstrap Ignition file that the installer program generates, bootstrap.ign.
The infrastructure ID from the installer's metadata file is set as an environment variable ($INFRA_ID). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files.
You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server.

Procedure

Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs:

import base64
import json
import os

with open('bootstrap.ign', 'r') as f:
    ignition = json.load(f)

files = ignition['storage'].get('files', [])

infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
files.append(
{
    'path': '/etc/hostname',
    'mode': 420,
    'contents': {
        'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
    }
})

ca_cert_path = os.environ.get('OS_CACERT', '')
if ca_cert_path:
    with open(ca_cert_path, 'r') as f:
        ca_cert = f.read().encode()
    ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
    files.append(
    {
        'path': '/opt/openshift/tls/cloud-ca-cert.pem',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
        }
    })

ignition['storage']['files'] = files

with open('bootstrap.ign', 'w') as f:
    json.dump(ignition, f)

Using the RHOSP CLI, create an image that uses the bootstrap Ignition file:

$ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>

Get the image's details:

$ openstack image show <image_name>

Make a note of the file value; it follows the pattern v2/images/<image_ID>/file.

Note
Verify that the image you created is active.

Retrieve the image service's public address:

$ openstack catalog show image

Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file.

Generate an auth token and save the token ID:

$ openstack token issue -c id -f value

Insert the following content into a file called $INFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values:

{
  "ignition": {
    "config": {
      "merge": [{
        "source": "<storage_url>", 1
        "httpHeaders": [{
          "name": "X-Auth-Token", 2
          "value": "<token_ID>" 3
        }]
      }]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [{
          "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
        }]
      }
    },
    "version": "3.2.0"
  }
}

1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL.
2 Set name in httpHeaders to "X-Auth-Token".
3 Set value in httpHeaders to your token's ID.
4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate.

Save the secondary Ignition config file.

The bootstrap Ignition data will be passed to RHOSP during installation.

Warning
The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process.
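After the script runs, you can sanity-check that the hostname file made it into bootstrap.ign. A minimal sketch:

import base64
import json

ignition = json.load(open('bootstrap.ign'))

# Find the /etc/hostname entry that the script above appended.
for f in ignition['storage']['files']:
    if f['path'] == '/etc/hostname':
        encoded = f['contents']['source'].split('base64,', 1)[1]
        print('hostname:', base64.standard_b64decode(encoded).decode().strip())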
18.7.17. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ($INFRA_ID). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script:

$ for index in $(seq 0 2); do
    MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
    python -c "import base64, json, sys;
ignition = json.load(sys.stdin);
storage = ignition.get('storage', {});
files = storage.get('files', []);
files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
storage['files'] = files;
ignition['storage'] = storage
json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
done

You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json, <INFRA_ID>-master-1-ignition.json, and <INFRA_ID>-master-2-ignition.json.
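Optionally, you can verify that each generated file parses as valid JSON. The following loop is a minimal check that uses Python's json.tool module:

$ for index in $(seq 0 2); do python -m json.tool "$INFRA_ID-master-$index-ignition.json" > /dev/null && echo "master-$index: OK"; done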
See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" 18.7.19. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 18.7.20. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.23.0 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 18.7.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration:

$ oc whoami

Example output

system:admin

18.7.22. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml, common.yaml, and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook:

$ ansible-playbook -i inventory.yaml down-bootstrap.yaml

The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 18.7.23. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml, common.yaml, and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook:

$ ansible-playbook -i inventory.yaml compute-nodes.yaml

Next steps Approve the certificate signing requests for the machines. 18.7.24. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines:

$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.23.0
master-1   Ready    master   63m   v1.23.0
master-2   Ready    master   64m   v1.23.0

The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
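Tip To list only the CSRs that are still in the Pending state, you can filter the output, for example:

$ oc get csr | grep Pending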
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal    Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal    Pending
...

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.23.0
master-1   Ready    master   73m   v1.23.0
master-2   Ready    master   74m   v1.23.0
worker-0   Ready    worker   11m   v1.23.0
worker-1   Ready    worker   11m   v1.23.0

Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
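Tip To watch the nodes as they transition to the Ready status, you can run, for example:

$ watch -n5 oc get nodes

The watch utility, if it is available on your system, reruns the command every five seconds.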
Additional information For more information on CSRs, see Certificate Signing Requests. 18.7.25. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program (openshift-install). Procedure On a command line, enter:

$ openshift-install --log-level debug wait-for install-complete

The program outputs the console URL, as well as the administrator's login information. 18.7.26. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 18.7.27. Next steps Customize your cluster. If necessary, you can opt out of remote health reporting. If you need to enable external access to node ports, configure ingress cluster traffic by using a node port. If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses. 18.8. Installing a cluster on OpenStack on your own SR-IOV infrastructure In OpenShift Container Platform 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure and uses single-root input/output virtualization (SR-IOV) networks to run compute machines. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, such as Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 18.8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You verified that OpenShift Container Platform 4.10 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix. Your network configuration does not rely on a provider network. Provider networks are not supported. You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices. On the machine where you run the installation program, you have: A single directory in which you can keep the files you create during the installation process. Python 3. 18.8.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.8.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 18.33. Recommended resources for a default OpenShift Container Platform cluster on RHOSP

Resource               Value
Floating IP addresses  3
Ports                  15
Routers                1
Subnets                1
RAM                    88 GB
vCPUs                  22
Volume storage         275 GB
Instances              7
Security groups        3
Security group rules   60
Server groups          2, plus 1 for each additional availability zone in each machine pool

A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.
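To review the quotas that your project currently has, you can run, for example:

$ openstack quota show <project>

Replace <project> with your RHOSP project name or ID, and compare the output against the values in Table 18.33.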
An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 18.8.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.8.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. Additionally, for clusters that use single-root input/output virtualization (SR-IOV), RHOSP compute nodes require a flavor that supports huge pages. Important SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. Additional resources For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance. 18.8.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.8.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager:

$ sudo subscription-manager register # If not done already

Pull the latest subscription data:

$ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already

Disable the current repositories:

$ sudo subscription-manager repos --disable=* # If not done already

Add the required repositories:

$ sudo subscription-manager repos \
  --enable=rhel-8-for-x86_64-baseos-rpms \
  --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
  --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
  --enable=rhel-8-for-x86_64-appstream-rpms

Install the modules:

$ sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr

Ensure that the python command points to python3:

$ sudo alternatives --set python /usr/bin/python3

18.8.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. Procedure To download the playbooks to your working directory, run the following script from a command line:

$ xargs -n 1 curl -O <<< '
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/bootstrap.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/common.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/compute-nodes.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/control-plane.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/inventory.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/network.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/security-groups.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-bootstrap.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-compute-nodes.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-control-plane.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-load-balancers.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-network.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-security-groups.yaml
  https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-containers.yaml'

The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP.
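Tip One way to retain the playbooks, and to track any edits that you make to them, is to keep the working directory under version control. For example, if Git is installed on your machine:

$ git init && git add *.yaml && git commit -m 'OpenShift on RHOSP installation playbooks'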
Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 18.8.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.8.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519. Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 18.8.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page. Under Version, select the most recent release of OpenShift Container Platform 4.10 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW). Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz. To find out if or how the file is compressed, in a command line, enter:

$ file <name_of_downloaded_file>

From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI:

$ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos

Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.
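If your environment requires the .raw format, you can convert the decompressed QCOW2 image before you upload it. For example, with the qemu-img tool, if it is installed on your machine:

$ qemu-img convert -f qcow2 -O raw rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos-${RHCOS_VERSION}-openstack.raw

Then pass the converted file to the openstack image create command with --disk-format=raw instead of --disk-format=qcow2.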
Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 18.8.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries. Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network:

$ openstack network list --long -c ID -c Name -c "Router Type"

Example output

+--------------------------------------+----------------+-------------+
| ID                                   | Name           | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
+--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port. 18.8.10. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 18.8.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>

Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>

Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:

$ openstack floating ip create --description "bootstrap machine" <external_network>

Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

api.<cluster_name>.<base_domain>.    IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
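For example, with sample values, if the cluster name is openshift, the base domain is example.com, and the API and Ingress FIPs are 203.0.113.23 and 203.0.113.19, the records are:

api.openshift.example.com.    IN A 203.0.113.23
*.apps.openshift.example.com. IN A 203.0.113.19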
Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

<api_floating_ip> api.<cluster_name>.<base_domain>
<application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
<application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
<application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc CLI. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 18.8.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.    IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 18.8.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field.
You can also keep secrets in a separate file from clouds.yaml. If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: <username>
      password: <password>
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: <username>
      password: <password>
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'

If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"

Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run:

$ oc edit configmap -n openshift-config cloud-provider-config

Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 18.8.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to.
All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 18.8.13. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 18.8.13.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 18.34. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 18.8.13.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 18.35. Network parameters Parameter Description Values networking The configuration for the cluster network. 
Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.8.13.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.36. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. 
Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 18.8.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 18.37. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 18.8.13.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 18.38. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. 
On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. 
For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration.
{
  "type": "ml.large",
  "rootVolume": {
    "size": 30,
    "type": "performance"
  }
}
platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 18.8.13.6. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
18.8.13.7. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it.
You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 18.8.13.8. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run:
USD python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24 . To set the value manually, open the file and set the value of networking.machineNetwork to a CIDR that matches your intended Neutron subnet. 18.8.13.9. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run:
USD python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 .
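Before you generate manifests, you can sanity-check both of the preceding edits. The following is a minimal sketch, not part of the documented procedure; it assumes that install-config.yaml is in the current directory and that the openstack CLI is authenticated to your cloud:
import subprocess
import yaml

with open("install-config.yaml") as f:
    data = yaml.safe_load(f)

# If a custom subnet is set, its CIDR must match networking.machineNetwork.
subnet_id = data["platform"]["openstack"].get("machinesSubnet")
if subnet_id:
    cidr = subprocess.check_output(
        ["openstack", "subnet", "show", subnet_id, "-f", "value", "-c", "cidr"],
        text=True).strip()
    machine_cidr = data["networking"]["machineNetwork"][0]["cidr"]
    assert cidr == machine_cidr, f"subnet CIDR {cidr} != machineNetwork {machine_cidr}"

# For installations on your own infrastructure, compute replicas must be 0.
assert data["compute"][0]["replicas"] == 0, "compute replicas must be 0"
print("install-config.yaml checks passed")
18.8.14.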
Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. 
The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 18.8.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs:
import base64
import json
import os

with open('bootstrap.ign', 'r') as f:
    ignition = json.load(f)

files = ignition['storage'].get('files', [])

infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
files.append(
{
    'path': '/etc/hostname',
    'mode': 420,
    'contents': {
        'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
    }
})

ca_cert_path = os.environ.get('OS_CACERT', '')
if ca_cert_path:
    with open(ca_cert_path, 'r') as f:
        ca_cert = f.read().encode()
    ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
    files.append(
    {
        'path': '/opt/openshift/tls/cloud-ca-cert.pem',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
        }
    })

ignition['storage']['files'] = files

with open('bootstrap.ign', 'w') as f:
    json.dump(ignition, f)
Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file .
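If you script this lookup, the following minimal sketch retrieves the file value and the image status with the openstack CLI's JSON output and composes the storage location; <image_name> and <image_service_public_URL> are the same placeholders as in the steps above:
import json
import subprocess

image_name = "<image_name>"  # the name that you used with `openstack image create`
info = json.loads(subprocess.check_output(
    ["openstack", "image", "show", image_name, "-f", "json"], text=True))
# The image must be active before RHOSP can serve its file.
assert info["status"] == "active", f"image status is {info['status']}"

public_url = "<image_service_public_URL>"  # from `openstack catalog show image`
storage_url = public_url.rstrip("/") + "/" + info["file"].lstrip("/")
print(storage_url)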
Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values:
{
  "ignition": {
    "config": {
      "merge": [{
        "source": "<storage_url>", 1
        "httpHeaders": [{
          "name": "X-Auth-Token", 2
          "value": "<token_ID>" 3
        }]
      }]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [{
          "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
        }]
      }
    },
    "version": "3.2.0"
  }
}
1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 18.8.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script:
USD for index in USD(seq 0 2); do
    MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n"
    python -c "import base64, json, sys;
ignition = json.load(sys.stdin);
storage = ignition.get('storage', {});
files = storage.get('files', []);
files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
storage['files'] = files;
ignition['storage'] = storage
json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json"
done
You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 18.8.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook
...
# The public network providing connectivity to the cluster. If not
# provided, the cluster external connectivity must be provided in another
# way.
# Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.
os_external_network: 'external'
...
Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook
...
# OpenShift API floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the Control Plane to
# serve the OpenShift API.
os_api_fip: '203.0.113.23'

# OpenShift Ingress floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the worker nodes to serve
# the applications.
os_ingress_fip: '203.0.113.19'

# If this value is non-empty, the corresponding floating IP will be
# attached to the bootstrap machine. This is needed for collecting logs
# in case of install failure.
os_bootstrap_fip: '203.0.113.20'
Important If you do not define values for os_api_fip and os_ingress_fip , you must perform post-installation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 18.8.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor.
An example bare metal inventory.yaml file
all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: "{{ansible_playbook_python}}"

      # User-provided values
      os_subnet_range: '10.0.0.0/16'
      os_flavor_master: 'my-bare-metal-flavor' 1
      os_flavor_worker: 'my-bare-metal-flavor' 2
      os_image_rhcos: 'rhcos'
      os_external_network: 'external'
      ...
1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 18.8.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 18.8.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster:
INFO API v1.23.0 up
INFO Waiting up to 30m0s for bootstrapping to complete...
...
INFO It is now safe to remove the bootstrap resources
18.8.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 18.8.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 18.8.22. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks. Procedure On a command line, create a radio RHOSP network: USD openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: USD openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: USD openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: USD openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 18.8.23. Creating compute machines that run on SR-IOV networks After standing up the control plane, create compute machines that run on the SR-IOV networks that you created in "Creating SR-IOV networks for compute machines". Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The metadata.yaml file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. You created radio and uplink SR-IOV networks as described in "Creating SR-IOV networks for compute machines". 
Procedure On a command line, change the working directory to the location of the inventory.yaml and common.yaml files. Add the radio and uplink networks to the end of the inventory.yaml file by using the additionalNetworks parameter:
....
# If this value is non-empty, the corresponding floating IP will be
# attached to the bootstrap machine. This is needed for collecting logs
# in case of install failure.
os_bootstrap_fip: '203.0.113.20'

additionalNetworks:
- id: radio
  count: 4 1
  type: direct
  port_security_enabled: no
- id: uplink
  count: 4 2
  type: direct
  port_security_enabled: no
1 2 The count parameter defines the number of SR-IOV virtual functions (VFs) to attach to each worker node. In this case, each network has four VFs. Replace the content of the compute-nodes.yaml file with the following text: Example 18.1. compute-nodes.yaml
- import_playbook: common.yaml

- hosts: all
  gather_facts: no

  vars:
    worker_list: []
    port_name_list: []
    nic_list: []

  tasks:
  # Create the SDN/primary port for each worker node
  - name: 'Create the Compute ports'
    os_port:
      name: "{{ item.1 }}-{{ item.0 }}"
      network: "{{ os_network }}"
      security_groups:
      - "{{ os_sg_worker }}"
      allowed_address_pairs:
      - ip_address: "{{ os_ingressVIP }}"
    with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}"
    register: ports

  # Tag each SDN/primary port with cluster name
  - name: 'Set Compute ports tag'
    command:
      cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.1 }}-{{ item.0 }}"
    with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}"

  - name: 'List the Compute Trunks'
    command:
      cmd: "openstack network trunk list"
    when: os_networking_type == "Kuryr"
    register: compute_trunks

  - name: 'Create the Compute trunks'
    command:
      cmd: "openstack network trunk create --parent-port {{ item.1.id }} {{ os_compute_trunk_name }}-{{ item.0 }}"
    with_indexed_items: "{{ ports.results }}"
    when:
    - os_networking_type == "Kuryr"
    - "os_compute_trunk_name|string not in compute_trunks.stdout"

  - name: 'Call additional-port processing'
    include_tasks: additional-ports.yaml

  # Create additional ports in OpenStack
  - name: 'Create additionalNetworks ports'
    os_port:
      name: "{{ item.0 }}-{{ item.1.name }}"
      vnic_type: "{{ item.1.type }}"
      network: "{{ item.1.uuid }}"
      port_security_enabled: "{{ item.1.port_security_enabled|default(omit) }}"
      no_security_groups: "{{ 'true' if item.1.security_groups is not defined else omit }}"
      security_groups: "{{ item.1.security_groups | default(omit) }}"
    with_nested:
    - "{{ worker_list }}"
    - "{{ port_name_list }}"

  # Tag the ports with the cluster info
  - name: 'Set additionalNetworks ports tag'
    command:
      cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.0 }}-{{ item.1.name }}"
    with_nested:
    - "{{ worker_list }}"
    - "{{ port_name_list }}"

  # Build the nic list to use for server create
  - name: Build nic list
    set_fact:
      nic_list: "{{ nic_list | default([]) + [ item.name ] }}"
    with_items: "{{ port_name_list }}"

  # Create the servers
  - name: 'Create the Compute servers'
    vars:
      worker_nics: "{{ [ item.1 ] | product(nic_list) | map('join','-') | map('regex_replace', '(.*)', 'port-name=\\1') | list }}"
    os_server:
      name: "{{ item.1 }}"
      image: "{{ os_image_rhcos }}"
      flavor: "{{ os_flavor_worker }}"
      auto_ip: no
      userdata: "{{ lookup('file', 'worker.ign') | string }}"
      security_groups: []
      nics: "{{ [ 'port-name=' + os_port_worker + '-' + item.0|string ] + worker_nics }}"
      config_drive: yes
    with_indexed_items: "{{ worker_list }}"
Insert the following content into a local file that is called additional-ports.yaml : Example 18.2.
additional-ports.yaml
# Build a list of worker nodes with indexes
- name: 'Build worker list'
  set_fact:
    worker_list: "{{ worker_list | default([]) + [ item.1 + '-' + item.0 | string ] }}"
  with_indexed_items: "{{ [ os_compute_server_name ] * os_compute_nodes_number }}"

# Ensure that each network specified in additionalNetworks exists
- name: 'Verify additionalNetworks'
  os_networks_info:
    name: "{{ item.id }}"
  with_items: "{{ additionalNetworks }}"
  register: network_info

# Expand additionalNetworks by the count parameter in each network definition
- name: 'Build port and port index list for additionalNetworks'
  set_fact:
    port_list: "{{ port_list | default([]) + [ { 'net_name' : item.1.id, 'uuid' : network_info.results[item.0].openstack_networks[0].id, 'type' : item.1.type|default('normal'), 'security_groups' : item.1.security_groups|default(omit), 'port_security_enabled' : item.1.port_security_enabled|default(omit) } ] * item.1.count|default(1) }}"
    index_list: "{{ index_list | default([]) + range(item.1.count|default(1)) | list }}"
  with_indexed_items: "{{ additionalNetworks }}"

# Calculate and save the name of the port
# The format of the name is cluster_name-worker-workerID-networkUUID(partial)-count
# i.e. fdp-nz995-worker-1-99bcd111-1
- name: 'Calculate port name'
  set_fact:
    port_name_list: "{{ port_name_list | default([]) + [ item.1 | combine( {'name' : item.1.uuid | regex_search('([^-]+)') + '-' + index_list[item.0]|string } ) ] }}"
  with_indexed_items: "{{ port_list }}"
  when: port_list is defined
On a command line, run the compute-nodes.yaml playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml 18.8.24. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.23.0
master-1   Ready    master   63m   v1.23.0
master-2   Ready    master   64m   v1.23.0
The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output
NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval.
Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output
NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending
...
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.23.0
master-1   Ready    master   73m   v1.23.0
master-2   Ready    master   74m   v1.23.0
worker-0   Ready    worker   11m   v1.23.0
worker-1   Ready    worker   11m   v1.23.0
Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 18.8.25. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ). Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. The cluster is operational. Before you can configure it for SR-IOV networks though, you must perform additional tasks.
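The note in "Approving the certificate signing requests for your machines" calls for a method of automatically approving kubelet serving CSRs on user-provisioned infrastructure. The following polling loop is a minimal sketch of such a method, not a production approver: it shells out to the same oc commands shown in that section, and it does not confirm the identity of the node, which a real approver must do. It assumes that oc is on your PATH and logged in with cluster-admin rights:
import json
import subprocess
import time

BOOTSTRAPPER = "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper"

def pending_csrs():
    out = subprocess.check_output(["oc", "get", "csr", "-o", "json"], text=True)
    for csr in json.loads(out)["items"]:
        if not csr.get("status"):  # an empty status means neither approved nor denied
            yield csr

# Poll every 30 seconds; stop the loop manually once all nodes are Ready.
while True:
    for csr in pending_csrs():
        requestor = csr["spec"].get("username", "")
        # Approve client CSRs from the bootstrapper and serving CSRs from nodes.
        if requestor == BOOTSTRAPPER or requestor.startswith("system:node:"):
            name = csr["metadata"]["name"]
            subprocess.run(["oc", "adm", "certificate", "approve", name], check=True)
    time.sleep(30)
18.8.26.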
Preparing a cluster that runs on RHOSP for SR-IOV Before you use single root I/O virtualization (SR-IOV) on a cluster that runs on Red Hat OpenStack Platform (RHOSP), make the RHOSP metadata service mountable as a drive and enable the No-IOMMU Operator for the virtual function I/O (VFIO) driver. 18.8.26.1. Enabling the RHOSP metadata service as a mountable drive You can apply a machine config to your machine pool that makes the Red Hat OpenStack Platform (RHOSP) metadata service available as a mountable drive. The following machine config enables the display of RHOSP network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of SR-IOV resources to cluster SR-IOV resources. Procedure Create a machine config file from the following template: A mountable metadata service machine config file
kind: MachineConfig
apiVersion: machineconfiguration.openshift.io/v1
metadata:
  name: 20-mount-config 1
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: create-mountpoint-var-config.service
          enabled: true
          contents: |
            [Unit]
            Description=Create mountpoint /var/config
            Before=kubelet.service

            [Service]
            ExecStart=/bin/mkdir -p /var/config

            [Install]
            WantedBy=var-config.mount

        - name: var-config.mount
          enabled: true
          contents: |
            [Unit]
            Before=local-fs.target
            [Mount]
            Where=/var/config
            What=/dev/disk/by-label/config-2
            [Install]
            WantedBy=local-fs.target
1 You can substitute a name of your choice. From a command line, apply the machine config: USD oc apply -f <machine_config_file_name>.yaml 18.8.26.2. Enabling the No-IOMMU feature for the RHOSP VFIO driver You can apply a machine config to your machine pool that enables the No-IOMMU feature for the Red Hat OpenStack Platform (RHOSP) virtual function I/O (VFIO) driver. The RHOSP vfio-pci driver requires this feature. Procedure Create a machine config file from the following template: A No-IOMMU VFIO machine config file
kind: MachineConfig
apiVersion: machineconfiguration.openshift.io/v1
metadata:
  name: 99-vfio-noiommu 1
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/modprobe.d/vfio-noiommu.conf
        mode: 0644
        contents:
          # The base64 payload decodes to: options vfio enable_unsafe_noiommu_mode=1
          source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK
1 You can substitute a name of your choice. From a command line, apply the machine config: USD oc apply -f <machine_config_file_name>.yaml Note After you apply the machine config to the machine pool, you can watch the machine config pool status to see when the machines are available. The cluster is installed and prepared for SR-IOV configuration. You must now perform the SR-IOV configuration tasks in "Next steps". 18.8.27. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 18.8.28.
Additional resources See Performance Addon Operator for low latency nodes for information about configuring your deployment for real-time running and low latency. 18.8.29. Next steps To complete SR-IOV configuration for your cluster: Install the Performance Addon Operator . Configure the Performance Addon Operator with huge pages support . Install the SR-IOV Operator . Configure your SR-IOV network device . Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 18.9. Installing a cluster on OpenStack in a restricted network In OpenShift Container Platform 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. 18.9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.10 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices . You have the metadata service enabled in RHOSP. 18.9.2. About installations in restricted networks In OpenShift Container Platform 4.10, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 18.9.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
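On a restricted-network cluster, you can confirm that this ClusterVersion condition is the expected one rather than a sign of a broken cluster. The following is a minimal sketch that inspects the RetrievedUpdates condition; it assumes that oc is installed and logged in:
import json
import subprocess

out = subprocess.check_output(
    ["oc", "get", "clusterversion", "version", "-o", "json"], text=True)
for cond in json.loads(out)["status"]["conditions"]:
    if cond["type"] == "RetrievedUpdates":
        # In a restricted network, expect status "False" with a retrieval error message.
        print(cond["status"], "-", cond.get("message", ""))
18.9.3.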
Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 18.39. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 18.9.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.9.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 18.9.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 18.9.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. 
Before you update the cluster, you update the content of the mirror registry. 18.9.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 18.9.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: <username>
      password: <password>
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: <username>
      password: <password>
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 18.9.7. Setting cloud provider options Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP).
For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options based on the cloud configuration specification. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:
#...
[LoadBalancer]
use-octavia=true 1
lb-provider = "amphora" 2
floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3
create-monitor = True 4
monitor-delay = 10s 5
monitor-timeout = 10s 6
monitor-max-retries = 1 7
#...
1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.1 and 16.2, this feature is only available for the Amphora provider. 5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.1 and 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 18.9.7.1. External load balancers that use pre-defined floating IP addresses Commonly, Red Hat OpenStack Platform (RHOSP) deployments disallow non-administrator users from creating specific floating IP addresses.
If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers. If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method. Alternatively, you can modify the RHOSP Networking service (Neutron) to allow non-administrator users to create specific floating IP addresses . Additional resources For more information about cloud provider configuration, see OpenStack cloud provider options . 18.9.8. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.10 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. Decompress the image. Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. For example: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.x86_64.qcow2 rhcos-USD{RHCOS_VERSION} Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 18.9.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory.
Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select openstack as the platform to target.
Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
Specify the floating IP address to use for external access to the OpenShift API.
Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
Enter a name for your cluster. The name must be 14 or fewer characters long.
Paste the pull secret from the Red Hat OpenShift Cluster Manager.
In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example:
platform:
  openstack:
    clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d
Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network.
Update the pullSecret value to contain the authentication information for your registry:
pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}'
For <mirror_host_name>, specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials>, specify the base64-encoded user name and password for your mirror registry.
Add the additionalTrustBundle parameter and value.
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.
Add the image content resources, which resemble the following YAML excerpt:
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release
For these values, use the imageContentSources that you recorded during mirror registry creation.
Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
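Because misplaced fields in install-config.yaml are a common cause of failed installations, you can optionally confirm that the edited file at least parses as valid YAML before you continue. This is an informal sanity check, not part of the official procedure, and it assumes Python 3 with the PyYAML module is available on your workstation:
$ python3 -c 'import yaml; yaml.safe_load(open("install-config.yaml"))' && echo "install-config.yaml is well-formed YAML"
Note that this check validates syntax only; it does not verify that field names or values are correct.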
18.9.9.1. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Note Kuryr installations default to HTTP proxies.
Prerequisites
For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter:
$ ip route add <cluster_network_cidr> via <installer_subnet_gateway>
The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates.
You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note The installation program does not support the proxy readinessEndpoints field.
Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
Save the file and reference it when installing OpenShift Container Platform.
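As the note that follows explains, the installation program creates a cluster-wide Proxy object named cluster. After the cluster is up, you can confirm that your settings were applied by inspecting that object. This is an informal verification step, not part of the official procedure:
$ oc get proxy/cluster -o yaml
The httpProxy, httpsProxy, and noProxy values from your install-config.yaml file should appear in the object's spec.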
The installation program creates a cluster-wide proxy named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Note Only the Proxy object named cluster is supported, and no additional proxies can be created.
18.9.9.2. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
Note After installation, you cannot modify these parameters in the install-config.yaml file.
18.9.9.2.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
Table 18.40. Required parameters Parameter Description Values
apiVersion The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions. String
baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com.
metadata Kubernetes resource ObjectMeta, from which only the name parameter is consumed. Object
metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.
platform The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object
pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
{
  "auths":{
    "cloud.openshift.com":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    },
    "quay.io":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    }
  }
}
18.9.9.2.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported.
Table 18.41. Network parameters Parameter Description Values
networking The configuration for the cluster network. Object
Note You cannot modify parameters specified by the networking object after installation.
networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers.
The default value is OpenShiftSDN.
networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
networking.clusterNetwork.cidr Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.
networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23.
networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example:
networking:
  serviceNetwork:
  - 172.30.0.0/16
networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example:
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
networking.machineNetwork.cidr Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24. An IP network block in CIDR notation. For example, 10.0.0.0/16.
Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
18.9.9.2.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
Table 18.42. Optional parameters Parameter Description Values
additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String
cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true
compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects.
compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String
compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled
compute.name Required if you use compute. The name of the machine pool. worker
compute.platform Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines.
This parameter value must match the controlPlane.platform parameter value. alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}
compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2. The default value is 3.
controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects.
controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String
controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled
controlPlane.name Required if you use controlPlane. The name of the machine pool. master
controlPlane.platform Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}
controlPlane.replicas The number of control plane machines to provision. The only supported value is 3, which is the default value.
credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. Mint, Passthrough, Manual, or an empty string ("").
fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true
imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.
imageContentSources.source Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. String
imageContentSources.mirrors Specify one or more repositories that may also contain the same images.
Array of strings.
publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External. The default value is External. Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.
sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example:
sshKey: <key1> <key2> <key3>
18.9.9.2.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters
Additional RHOSP configuration parameters are described in the following table:
Table 18.43. Additional RHOSP parameters Parameter Description Values
compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30.
compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance.
controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30.
controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance.
platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud.
platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external.
platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge.
18.9.9.2.5. Optional RHOSP configuration parameters
Optional RHOSP configuration parameters are described in the following table:
Table 18.44. Optional RHOSP parameters Parameter Description Values
compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.
compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"].
compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"].
compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity. An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity.
controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.
controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"].
controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"].
controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity. An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity.
platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.
platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance.
This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi. You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes. A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"].
platform.openstack.defaultMachinePlatform The default machine pool platform configuration.
{
  "type": "ml.large",
  "rootVolume": {
    "size": 30,
    "type": "performance"
  }
}
platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1.
platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1.
platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].
platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet. If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP. A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
18.9.9.3. Sample customized install-config.yaml file for restricted OpenStack installations
This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.
Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    region: region1
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
18.9.10. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes.
The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required.
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
18.9.11. Enabling access to the environment
At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.
You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.
18.9.11.1. Enabling access with floating IP addresses
Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications.
Procedure
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:
<api_floating_ip> api.<cluster_name>.<base_domain>
<application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
<application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
<application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>
The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.
Add the FIPs to the install-config.yaml file as the values of the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file.
Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.
18.9.11.2. Completing installation without floating IP addresses
You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.
In the install-config.yaml file, do not define the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own.
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.
Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file.
This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.
18.9.12. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
18.9.13. Verifying cluster status
You can verify your OpenShift Container Platform cluster's status during or after installation.
Procedure
In the cluster environment, export the administrator's kubeconfig file:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
View the control plane and compute machines created after a deployment:
$ oc get nodes
View your cluster's version:
$ oc get clusterversion
View your Operators' status:
$ oc get clusteroperator
View all running pods in the cluster:
$ oc get pods -A
18.9.14. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
18.9.15. Disabling the default OperatorHub sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources.
18.9.16. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources See About remote health monitoring for more information about the Telemetry service.
18.9.17. Next steps
Customize your cluster.
If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.
If necessary, you can opt out of remote health reporting.
If necessary, see Registering your disconnected cluster.
Configure image streams for the Cluster Samples Operator and the must-gather tool.
Learn how to use Operator Lifecycle Manager (OLM) on restricted networks.
If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses.
18.10.
OpenStack cloud configuration reference guide
A cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). Use the following parameters in a cloud-provider configuration manifest file to configure your cluster.
18.10.1. OpenStack cloud provider options
The cloud provider configuration, typically stored as a file named cloud.conf, controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). You can create a valid cloud.conf file by specifying the following options in it.
18.10.1.1. Global options
The following options are used for RHOSP CCM authentication with the RHOSP Identity service, also known as Keystone. They are similar to the global options that you can set by using the openstack CLI.
Option Description
auth-url The RHOSP Identity service URL. For example, http://128.110.154.166/identity.
ca-file Optional. The CA certificate bundle file for communication with the RHOSP Identity service. If you use the HTTPS protocol with the Identity service URL, this option is required.
domain-id The Identity service user domain ID. Leave this option unset if you are using Identity service application credentials.
domain-name The Identity service user domain name. This option is not required if you set domain-id.
tenant-id The Identity service project ID. Leave this option unset if you are using Identity service application credentials. In version 3 of the Identity API, which changed the identifier tenant to project, the value of tenant-id is automatically mapped to the project construct in the API.
tenant-name The Identity service project name.
username The Identity service user name. Leave this option unset if you are using Identity service application credentials.
password The Identity service user password. Leave this option unset if you are using Identity service application credentials.
region The Identity service region name.
trust-id The Identity service trust ID. A trust represents the authorization of a user, or trustor, to delegate roles to another user, or trustee. Optionally, a trust authorizes the trustee to impersonate the trustor. You can find available trusts by querying the /v3/OS-TRUST/trusts endpoint of the Identity service API.
18.10.1.2. Load balancer options
The cloud provider supports several load balancer options for deployments that use Octavia.
Option Description
use-octavia Whether or not to use Octavia for the LoadBalancer type of the service implementation rather than Neutron-LBaaS. The default value is true.
floating-network-id Optional. The external network used to create floating IP addresses for load balancer virtual IP addresses (VIPs). If there are multiple external networks in the cloud, this option must be set or the user must specify loadbalancer.openstack.org/floating-network-id in the service annotation.
lb-method The load balancing algorithm used to create the load balancer pool. For the Amphora provider the value can be ROUND_ROBIN, LEAST_CONNECTIONS, or SOURCE_IP. The default value is ROUND_ROBIN. For the OVN provider, only the SOURCE_IP_PORT algorithm is supported. For the Amphora provider, if using the LEAST_CONNECTIONS or SOURCE_IP methods, configure the create-monitor option as true in the cloud-provider-config config map on the openshift-config namespace and ETP:Local on the load-balancer type service to allow balancing algorithm enforcement in the client to service endpoint connections.
lb-provider Optional.
Used to specify the provider of the load balancer, for example, amphora or octavia. Only the Amphora and Octavia providers are supported.
lb-version Optional. The load balancer API version. Only "v2" is supported.
subnet-id The ID of the Networking service subnet on which load balancer VIPs are created.
create-monitor Whether or not to create a health monitor for the service load balancer. A health monitor is required for services that declare externalTrafficPolicy: Local. The default value is false. This option is unsupported if you use RHOSP earlier than version 17 with the ovn provider.
monitor-delay The interval in seconds by which probes are sent to members of the load balancer. The default value is 5.
monitor-max-retries The number of successful checks that are required to change the operating status of a load balancer member to ONLINE. The valid range is 1 to 10, and the default value is 1.
monitor-timeout The time in seconds that a monitor waits to connect to the back end before it times out. The default value is 3.
18.10.1.3. Metadata options
Option Description
search-order This configuration key affects the way that the provider retrieves metadata that relates to the instances in which it runs. The default value of configDrive,metadataService results in the provider retrieving instance metadata from the configuration drive first if available, and then the metadata service. Alternative values are: configDrive: Only retrieve instance metadata from the configuration drive. metadataService: Only retrieve instance metadata from the metadata service. metadataService,configDrive: Retrieve instance metadata from the metadata service first if available, and then retrieve instance metadata from the configuration drive.
18.11. Uninstalling a cluster on OpenStack
You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP).
18.11.1. Removing a cluster that uses installer-provisioned infrastructure
You can remove a cluster that uses installer-provisioned infrastructure from your cloud.
Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access.
Prerequisites
Have a copy of the installation program that you used to deploy the cluster.
Have the files that the installation program generated when you created your cluster.
Procedure
From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:
$ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
2 To view different details, specify warn, debug, or error instead of info.
Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.
Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
18.12. Uninstalling a cluster on RHOSP from your own infrastructure
You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure.
18.12.1.
Downloading playbook dependencies
The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the modules' repositories and then download them.
Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.
Prerequisites
Python 3 is installed on your machine.
Procedure
On a command line, add the repositories:
Register with Red Hat Subscription Manager:
$ sudo subscription-manager register # If not done already
Attach a subscription pool:
$ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already
Disable the current repositories:
$ sudo subscription-manager repos --disable=* # If not done already
Add the required repositories:
$ sudo subscription-manager repos \
    --enable=rhel-8-for-x86_64-baseos-rpms \
    --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
    --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
    --enable=rhel-8-for-x86_64-appstream-rpms
Install the modules:
$ sudo yum install python3-openstackclient ansible python3-openstacksdk
Ensure that the python command points to python3:
$ sudo alternatives --set python /usr/bin/python3
18.12.2. Removing a cluster from RHOSP that uses your own infrastructure
You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, run several Ansible playbooks.
Prerequisites
Python 3 is installed on your machine.
You downloaded the modules in "Downloading playbook dependencies."
You have the playbooks that you used to install the cluster.
You modified the playbooks that are prefixed with down- to reflect any changes that you made to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file are reflected in the down-bootstrap.yaml file.
All of the playbooks are in a common directory.
Procedure
On a command line, run the playbooks that you downloaded:
$ ansible-playbook -i inventory.yaml \
    down-bootstrap.yaml \
    down-control-plane.yaml \
    down-compute-nodes.yaml \
    down-load-balancers.yaml \
    down-network.yaml \
    down-security-groups.yaml
Remove any DNS record changes you made for the OpenShift Container Platform installation.
OpenShift Container Platform is removed from your infrastructure.
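After the playbooks complete, you can optionally confirm that no cluster resources remain in your RHOSP project. The following commands are an informal check, not part of the official procedure; they assume that, as with installer defaults, your resource names contain the cluster name that you chose at installation time:
$ openstack server list | grep <cluster_name>
$ openstack port list | grep <cluster_name>
$ openstack security group list | grep <cluster_name>
Any output identifies resources that you might need to delete manually with the corresponding openstack delete commands.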
[ "#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi", "x509: certificate relies on legacy Common Name field, use SANs instead", "openstack catalog list", "host=<host_name>", "port=<port_number>", "openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName", "X509v3 Subject Alternative Name: DNS:your.host.example.net", "x509: certificate relies on legacy Common Name field, use SANs instead", "openstack role add --user <user> --project <project> swiftoperator", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>", "oc apply -f <storage_class_file_name>", "storageclass.storage.k8s.io/custom-csi-storageclass created", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3", "oc apply -f <pvc_file_name>", "persistentvolumeclaim/csi-pvc-imageregistry created", "oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'", "config.imageregistry.operator.openshift.io/cluster patched", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "status: managementState: Managed pvc: claim: csi-pvc-imageregistry", "oc get pvc -n openshift-image-registry csi-pvc-imageregistry", "NAME STATUS VOLUME CAPACITY 
ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3", "./openshift-install wait-for install-complete --log-level debug", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: 
ssh-ed25519 AAAA", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>", "(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml", "- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787", "(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose", "(undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml", "openstack project show <project>", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+", "source stackrc # Undercloud credentials", "openstack server list", "+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | 
Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+", "ssh [email protected]", "List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID", "controller-0USD sudo docker restart octavia_worker", "openstack loadbalancer provider list", "+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+", "openstack role add --user <user> --project <project> swiftoperator", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "ip route add <cluster_network_cidr> via <installer_subnet_gateway>", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 
compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-network-03-config.yml 1", "ls <installation_directory>/manifests/cluster-network-*", "cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "openstack role add --user <user> --project <project> swiftoperator", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating 
ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "openstack network create radio --provider-physical-network radio --provider-network-type flat --external", "openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external", "openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio", "openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target", "oc apply -f <machine_config_file_name>.yaml", "kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK", "oc apply -f <machine_config_file_name>.yaml", "openstack role add --user <user> --project <project> swiftoperator", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf 
openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "openstack network create radio --provider-physical-network radio --provider-network-type flat --external", "openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external", "openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio", "openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target", "oc apply -f <machine_config_file_name>.yaml", "kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK", "oc apply -f <machine_config_file_name>.yaml", "openstack network show <VFIO_network_name> -f value -c id", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-vhostuser-bind spec: config: ignition: version: 2.2.0 systemd: units: - name: vhostuser-bind.service enabled: true contents: | [Unit] Description=Vhostuser Interface vfio-pci Bind Wants=network-online.target After=network-online.target ignition-firstboot-complete.service [Service] Type=oneshot EnvironmentFile=/etc/vhostuser-bind.conf ExecStart=/usr/local/bin/vhostuser $ARG [Install] WantedBy=multi-user.target storage: files: - contents: inline: vfio-pci filesystem: root mode: 0644 path: /etc/modules-load.d/vfio-pci.conf - contents: inline: | #!/bin/bash set -e if [[ \"$#\" -lt 1 ]]; then echo \"Network ID not provided, nothing to do\" exit fi source /etc/vhostuser-bind.conf NW_DATA=\"/var/config/openstack/latest/network_data.json\" if [ ! -f ${NW_DATA} ]; then echo \"Network data file not found, trying to download it from nova metadata\" if ! curl http://169.254.169.254/openstack/latest/network_data.json > /tmp/network_data.json; then echo \"Failed to download network data file\" exit 1 fi NW_DATA=\"/tmp/network_data.json\" fi function parseNetwork() { local nwid=$1 local pcis=() echo \"Network ID is $nwid\" links=$(jq '.networks[] | select(.network_id == \"'$nwid'\") | .link' $NW_DATA) if [ ${#links} -gt 0 ]; then for link in $links; do echo \"Link Name: $link\" mac=$(jq -r '.links[] | select(.id == '$link') | .ethernet_mac_address' $NW_DATA) if [ -n $mac ]; then pci=$(bindDriver $mac) pci_ret=$? if [[ \"$pci_ret\" -eq 0 ]]; then echo \"$pci bind successful\" fi fi done fi } function bindDriver() { local mac=$1 for file in /sys/class/net/*; do dev_mac=$(cat $file/address) if [[ \"$mac\" == \"$dev_mac\" ]]; then name=${file##*\\/} bus_str=$(ethtool -i $name | grep bus) dev_t=${bus_str#*:} dev=${dev_t#[[:space:]]} echo $dev devlink=\"/sys/bus/pci/devices/$dev\" syspath=$(realpath \"$devlink\") if [ ! -f \"$syspath/driver/unbind\" ]; then echo \"File $syspath/driver/unbind not found\" return 1 fi if ! echo \"$dev\">\"$syspath/driver/unbind\"; then return 1 fi if [ ! -f \"$syspath/driver_override\" ]; then echo \"File $syspath/driver_override not found\" return 1 fi if ! echo \"vfio-pci\">\"$syspath/driver_override\"; then return 1 fi if [ ! -f \"/sys/bus/pci/drivers/vfio-pci/bind\" ]; then echo \"File /sys/bus/pci/drivers/vfio-pci/bind not found\" return 1 fi if ! echo \"$dev\">\"/sys/bus/pci/drivers/vfio-pci/bind\"; then return 1 fi return 0 fi done return 1 } for nwid in \"$@\"; do parseNetwork $nwid done filesystem: root mode: 0744 path: /usr/local/bin/vhostuser - contents: inline: | ARG=\"be22563c-041e-44a0-9cbd-aa391b439a39,ec200105-fb85-4181-a6af-35816da6baf7\" 1 filesystem: root mode: 0644 path: /etc/vhostuser-bind.conf", "oc get nodes", "oc debug node/<node_name>", "chroot /host", "lspci -k", "00:07.0 Ethernet controller: Red Hat, Inc. Virtio network device Subsystem: Red Hat, Inc. Device 0001 Kernel driver in use: vfio-pci", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: vhostuser1 namespace: default spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"hostonly\", \"type\": \"host-device\", \"pciBusId\": \"0000:00:04.0\", \"ipam\": { } }'", "oc -n <your_cnf_namespace> get net-attach-def", "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr", "sudo alternatives --set python /usr/bin/python3", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-load-balancers.yaml 
https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-containers.yaml'", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "file <name_of_downloaded_file>", "openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"bootstrap machine\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); 
data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "export INFRA_ID=USD(jq -r .infraID metadata.json)", "import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)", "openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>", "openstack image show <image_name>", "openstack catalog show image", "openstack token issue -c id -f value", "{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }", "for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done", "# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'", "# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. 
If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'", "ansible-playbook -i inventory.yaml security-groups.yaml", "ansible-playbook -i inventory.yaml network.yaml", "openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"", "all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'", "./openshift-install wait-for install-complete --log-level debug", "ansible-playbook -i inventory.yaml bootstrap.yaml", "openstack console log show \"USDINFRA_ID-bootstrap\"", "ansible-playbook -i inventory.yaml control-plane.yaml", "openshift-install wait-for bootstrap-complete", "INFO API v1.23.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ansible-playbook -i inventory.yaml down-bootstrap.yaml", "ansible-playbook -i inventory.yaml compute-nodes.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0", "openshift-install --log-level debug wait-for install-complete", "sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>", "(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml", "- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 
push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787", "(undercloud) $ sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose", "(undercloud) $ cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml", "openstack project show <project>", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+", "source stackrc # Undercloud credentials", "openstack server list", "+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+", "ssh [email protected]", "# List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID", "controller-0$ sudo docker restart octavia_worker", "openstack loadbalancer provider list", "+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. 
| +---------+-------------------------------------------------+", "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr", "sudo alternatives --set python /usr/bin/python3", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-containers.yaml'", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "file <name_of_downloaded_file>", "openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"bootstrap machine\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP>", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-network-03-config.yml 1", "ls <installation_directory>/manifests/cluster-network-*", "cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs 
--dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "export INFRA_ID=USD(jq -r .infraID metadata.json)", "import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)", "openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>", "openstack image show <image_name>", "openstack catalog show image", "openstack token issue -c id -f value", "{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }", "for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done", "# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'", "# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. 
os_bootstrap_fip: '203.0.113.20'", "ansible-playbook -i inventory.yaml security-groups.yaml", "ansible-playbook -i inventory.yaml network.yaml", "openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"", "ansible-playbook -i inventory.yaml bootstrap.yaml", "openstack console log show \"USDINFRA_ID-bootstrap\"", "ansible-playbook -i inventory.yaml control-plane.yaml", "openshift-install wait-for bootstrap-complete", "INFO API v1.23.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ansible-playbook -i inventory.yaml down-bootstrap.yaml", "ansible-playbook -i inventory.yaml compute-nodes.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0", "openshift-install --log-level debug wait-for install-complete", "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr", "sudo alternatives --set python /usr/bin/python3", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-compute-nodes.yaml 
Chapter 7. keystone
The following chapter contains information about the configuration options in the keystone service.

7.1. keystone.conf
This section contains options for the /etc/keystone/keystone.conf file.

7.1.1. DEFAULT
The following table outlines the options available under the [DEFAULT] group in the /etc/keystone/keystone.conf file.
Configuration option = Default value Type Description
admin_token = None string value Using this feature is NOT recommended. Instead, use the keystone-manage bootstrap command. The value of this option is treated as a "shared secret" that can be used to bootstrap Keystone through the API. This "token" does not represent a user (it has no identity), and carries no explicit authorization (it effectively bypasses most authorization checks). If set to None, the value is ignored and the admin_token middleware is effectively disabled.
conn_pool_min_size = 2 integer value The pool size limit for the connection expiration policy.
conn_pool_ttl = 1200 integer value The time-to-live in seconds of idle connections in the pool.
control_exchange = keystone string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level.
default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set.
default_publisher_id = None string value Default publisher_id for outgoing notifications. If left undefined, Keystone will default to using the server's host name.
executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet.
fatal_deprecations = False boolean value Enables or disables fatal status of deprecations.
insecure_debug = False boolean value If set to true, then the server will return information in HTTP responses that may allow an unauthenticated or authenticated user to get more information than normal, such as additional details about why authentication failed. This may be useful for debugging but is insecure.
instance_format = [instance: %(uuid)s] string value The format for an instance that is passed with the log message.
instance_uuid_format = [instance: %(uuid)s] string value The format for an instance UUID that is passed with the log message.
list_limit = None integer value The maximum number of entities that will be returned in a collection. This global limit may then be overridden for a specific driver by specifying a list_limit in the appropriate section (for example, [assignment]). No limit is set by default. In larger deployments, it is recommended that you set this to a reasonable number to prevent operations like listing all users and projects from placing an unnecessary load on the system.
log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files.
For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used, all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format).
log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s. This option is ignored if log_config_append is set.
log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set.
log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set.
log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval".
log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the next rotation.
log_rotation_type = none string value Log rotation type.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter.
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to the log message when the logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter.
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter.
logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter.
max_logfile_count = 30 integer value Maximum number of rotated log files.
max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size".
max_param_size = 64 integer value Limit the sizes of user & project ID/names.
max_project_tree_depth = 5 integer value Maximum depth of the project hierarchy, excluding the project acting as a domain at the top of the hierarchy. WARNING: Setting it to a large value may adversely impact performance.
max_token_size = 255 integer value Similar to [DEFAULT] max_param_size, but provides an exception for token values. With Fernet tokens, this can be set as low as 255.
notification_format = cadf string value Define the notification format for identity service events. A basic notification only has information about the resource being operated on. A cadf notification has the same information, as well as information about the initiator of the event. The cadf option is entirely backwards compatible with the basic option, but is fully CADF-compliant, and is recommended for auditing use cases.
notification_opt_out = ['identity.authenticate.success', 'identity.authenticate.pending', 'identity.authenticate.failed'] multi valued You can reduce the number of notifications keystone emits by explicitly opting out. Keystone will not emit notifications that match the patterns expressed in this list. Values are expected to be in the form of identity.<resource_type>.<operation>. By default, all notifications related to authentication are automatically suppressed. This field can be set multiple times in order to opt out of multiple notification topics. For example, the following suppresses notifications describing user creation or successful authentication events: notification_opt_out=identity.user.create notification_opt_out=identity.authenticate.success
public_endpoint = None uri value The base public endpoint URL for Keystone that is advertised to clients (NOTE: this does NOT affect how Keystone listens for connections). Defaults to the base host URL of the request. For example, if keystone receives a request to http://server:5000/v3/users, then this option will be automatically treated as http://server:5000. You should only need to set this option if either the value of the base URL contains a path that keystone does not automatically infer (/prefix/v3), or if the endpoint should be found on a different host.
publish_errors = False boolean value Enables or disables publication of error events.
rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval.
rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with a level greater than or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered.
rate_limit_interval = 0 integer value Interval, in seconds, of log rate limiting.
rpc_conn_pool_size = 30 integer value Size of RPC connection pool.
rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. The endpoint is named oslo_rpc_server_ping.
rpc_response_timeout = 60 integer value Seconds to wait for a response from a call.
strict_password_check = False boolean value If set to true, strict password length checking is performed for password manipulation. If a password exceeds the maximum length, the operation will fail with an HTTP 403 Forbidden error. If set to false, passwords are automatically truncated to the maximum length.
syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set.
transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:password@127.0.0.1:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol, which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set.
use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set.
use-syslog = False boolean value Use syslog for logging. The existing syslog format is DEPRECATED and will be changed later to honor RFC 5424. This option is ignored if log_config_append is set.
use_eventlog = False boolean value Log output to Windows Event Log.
use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set.
watch-log-file = False boolean value Uses a logging handler designed to watch the file system. When the log file is moved or removed, this handler will open a new log file with the specified path instantaneously. It makes sense only if the log_file option is specified and the Linux platform is used. This option is ignored if log_config_append is set.
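To make the relationships between a few of these [DEFAULT] options concrete, the following minimal keystone.conf fragment is a sketch only; the paths and choice of suppressed topic are illustrative, not recommended defaults. It enables debug logging, directs output to a dedicated log file, and opts out of one extra notification topic using the repeated-option pattern shown above (oslo.config treats the hyphenated and underscored spellings of log-dir/log-file as equivalent):

[DEFAULT]
# Raise the log level from the default INFO to DEBUG.
debug = True
# Write logs to /var/log/keystone/keystone.log instead of stderr.
log_dir = /var/log/keystone
log_file = keystone.log
# Emit CADF notifications, but suppress user-creation events;
# repeat notification_opt_out once per topic.
notification_format = cadf
notification_opt_out = identity.user.create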
7.1.2. application_credential
The following table outlines the options available under the [application_credential] group in the /etc/keystone/keystone.conf file.
Table 7.1. application_credential
Configuration option = Default value Type Description
cache_time = None integer value Time to cache application credential data in seconds. This has no effect unless global caching is enabled.
caching = True boolean value Toggle for application credential caching. This has no effect unless global caching is enabled.
driver = sql string value Entry point for the application credential backend driver in the keystone.application_credential namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point.
user_limit = -1 integer value Maximum number of application credentials a user is permitted to create. A value of -1 means unlimited. If a limit is not set, users are permitted to create application credentials at will, which could lead to bloat in the keystone database or open keystone to a DoS attack.

7.1.3. assignment
The following table outlines the options available under the [assignment] group in the /etc/keystone/keystone.conf file.
Table 7.2. assignment
Configuration option = Default value Type Description
driver = sql string value Entry point for the assignment backend driver (where role assignments are stored) in the keystone.assignment namespace. Only a SQL driver is supplied by keystone itself. Unless you are writing proprietary drivers for keystone, you do not need to set this option.
prohibited_implied_role = ['admin'] list value A list of role names which are prohibited from being an implied role.

7.1.4. auth
The following table outlines the options available under the [auth] group in the /etc/keystone/keystone.conf file.
Table 7.3. auth
Configuration option = Default value Type Description
application_credential = None string value Entry point for the application_credential auth plugin module in the keystone.auth.application_credential namespace. You do not need to set this unless you are overriding keystone's own application_credential authentication plugin.
external = None string value Entry point for the external (REMOTE_USER) auth plugin module in the keystone.auth.external namespace. Supplied drivers are DefaultDomain and Domain. The default driver is DefaultDomain, which assumes that all users identified by the username specified to keystone in the REMOTE_USER variable exist within the context of the default domain. The Domain option expects an additional environment variable to be presented to keystone, REMOTE_DOMAIN, containing the domain name of the REMOTE_USER (if REMOTE_DOMAIN is not set, the default domain will be used instead). You do not need to set this unless you are taking advantage of "external authentication", where the application server (such as Apache) is handling authentication instead of keystone.
mapped = None string value Entry point for the mapped auth plugin module in the keystone.auth.mapped namespace. You do not need to set this unless you are overriding keystone's own mapped authentication plugin.
methods = ['external', 'password', 'token', 'oauth1', 'mapped', 'application_credential'] list value Allowed authentication methods. Note: You should disable the external auth method if you are currently using federation. External auth and federation both use the REMOTE_USER variable. Since both the mapped and external plugins are invoked to validate attributes in the request environment, this can cause conflicts.
oauth1 = None string value Entry point for the OAuth 1.0a auth plugin module in the keystone.auth.oauth1 namespace. You do not need to set this unless you are overriding keystone's own oauth1 authentication plugin.
password = None string value Entry point for the password auth plugin module in the keystone.auth.password namespace. You do not need to set this unless you are overriding keystone's own password authentication plugin.
token = None string value Entry point for the token auth plugin module in the keystone.auth.token namespace. You do not need to set this unless you are overriding keystone's own token authentication plugin.
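For example, the note on the methods option advises dropping external when federation is enabled, because both consume the REMOTE_USER variable. A hedged sketch of the corresponding [auth] stanza follows; the method list simply mirrors the documented default minus external:

[auth]
# Omit "external" to avoid conflicts with the mapped (federation) plugin.
methods = password,token,oauth1,mapped,application_credential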
7.1.5. cache
The following table outlines the options available under the [cache] group in the /etc/keystone/keystone.conf file.
Table 7.4. cache
Configuration option = Default value Type Description
backend = dogpile.cache.null string value Cache backend module. For eventlet-based environments or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with fewer than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend.
backend_argument = [] multi valued Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>".
config_prefix = cache.oslo string value Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name.
dead_timeout = 60 floating point value Time in seconds before attempting to add a node back into the pool in the HashClient's internal mechanisms.
debug_cache_backend = False boolean value Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false.
enable_retry_client = False boolean value Enable retry client mechanisms to handle failure. Those mechanisms can be used to wrap all kinds of pymemcache clients. The wrapper allows you to define how many attempts to make and how long to wait between attempts.
enable_socket_keepalive = False boolean value Global toggle for the socket keepalive of dogpile's pymemcache backend.
enabled = True boolean value Global toggle for caching.
expiration_time = 600 integer value Default TTL, in seconds, for any cached item in the dogpile.cache region.
This applies to any cached method that doesn't have an explicit cache expiration time defined for it.
hashclient_retry_attempts = 2 integer value Number of times a client should be tried before it is marked dead and removed from the pool in the HashClient's internal mechanisms.
hashclient_retry_delay = 1 floating point value Time in seconds that should pass between retry attempts in the HashClient's internal mechanisms.
memcache_dead_retry = 300 integer value Number of seconds a memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
memcache_pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection.
memcache_pool_flush_on_reconnect = False boolean value Global toggle if memcache will be flushed on reconnect. (oslo_cache.memcache_pool backend only).
memcache_pool_maxsize = 10 integer value Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only).
memcache_pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only).
memcache_servers = ['localhost:11211'] list value Memcache servers in the format of "host:port". (dogpile.cache.memcached and oslo_cache.memcache_pool backends only). If a given host refers to IPv6 or a given domain refers to IPv6, you should prefix the given address with the address family (inet6), for example inet6[::1]:11211, inet6:[fd12:3456:789a:1::1]:11211, inet6:[controller-0.internalapi]:11211. If the address family is not given, the default address family used will be inet, which corresponds to IPv4.
memcache_socket_timeout = 1.0 floating point value Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
proxies = [] list value Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior.
retry_attempts = 2 integer value Number of times to attempt an action before failing.
retry_delay = 0 floating point value Number of seconds to sleep between each attempt.
socket_keepalive_count = 1 integer value The maximum number of keepalive probes TCP should send before dropping the connection. Should be a positive integer greater than zero.
socket_keepalive_idle = 1 integer value The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes. Should be a positive integer greater than zero.
socket_keepalive_interval = 1 integer value The time (in seconds) between individual keepalive probes. Should be a positive integer greater than zero.
tls_allowed_ciphers = None string value Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL-enabled ciphers will be available.
tls_cafile = None string value Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored.
tls_certfile = None string value Path to a single file in PEM format containing the client's certificate as well as any number of CA certificates needed to establish the certificate's authenticity. This file is only required when client-side authentication is necessary. If tls_enabled is False, this option is ignored.
tls_enabled = False boolean value Global toggle for TLS usage when communicating with the caching servers.
tls_keyfile = None string value Path to a single file containing the client's private key. Otherwise the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored.
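Putting the most common of these options together, a minimal sketch of a pooled-Memcached [cache] configuration might look as follows. The server addresses are placeholders, not values from this guide:

[cache]
enabled = True
# Pooled memcache, the backend recommended above for large threaded deployments.
backend = oslo_cache.memcache_pool
memcache_servers = 192.0.2.10:11211,192.0.2.11:11211
# Default TTL for cached items that define no explicit expiration.
expiration_time = 600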
7.1.6. catalog
The following table outlines the options available under the [catalog] group in the /etc/keystone/keystone.conf file.
Table 7.5. catalog
Configuration option = Default value Type Description
cache_time = None integer value Time to cache catalog data (in seconds). This has no effect unless global and catalog caching are both enabled. Catalog data (services, endpoints, etc.) typically does not change frequently, and so a longer duration than the global default may be desirable.
caching = True boolean value Toggle for catalog caching. This has no effect unless global caching is enabled. In a typical deployment, there is no reason to disable this.
driver = sql string value Entry point for the catalog driver in the keystone.catalog namespace. Keystone provides a sql option (which supports basic CRUD operations through SQL), a templated option (which loads the catalog from a templated catalog file on disk), and an endpoint_filter.sql option (which supports arbitrary service catalogs per project).
list_limit = None integer value Maximum number of entities that will be returned in a catalog collection. There is typically no reason to set this, as it would be unusual for a deployment to have enough services or endpoints to exceed a reasonable limit.
template_file = default_catalog.templates string value Absolute path to the file used for the templated catalog backend. This option is only used if the [catalog] driver is set to templated.

7.1.7. cors
The following table outlines the options available under the [cors] group in the /etc/keystone/keystone.conf file.
Table 7.6. cors
Configuration option = Default value Type Description
allow_credentials = True boolean value Indicate that the actual request can include user credentials.
allow_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Project-Id', 'X-Project-Name', 'X-Project-Domain-Id', 'X-Project-Domain-Name', 'X-Domain-Id', 'X-Domain-Name', 'Openstack-Auth-Receipt'] list value Indicate which header field names may be used during the actual request.
allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request.
allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com
expose_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'Openstack-Auth-Receipt'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.
max_age = 3600 integer value Maximum cache age of CORS preflight requests.

7.1.8. credential
The following table outlines the options available under the [credential] group in the /etc/keystone/keystone.conf file.
Table 7.7. credential
Configuration option = Default value Type Description
auth_ttl = 15 integer value The length of time in minutes for which a signed EC2 or S3 token request is valid from the timestamp contained in the token request.
cache_time = None integer value Time to cache credential data in seconds. This has no effect unless global caching is enabled.
caching = True boolean value Toggle for caching only on retrieval of user credentials. This has no effect unless global caching is enabled.
driver = sql string value Entry point for the credential backend driver in the keystone.credential namespace. Keystone only provides a sql driver, so there's no reason to change this unless you are providing a custom entry point.
key_repository = /etc/keystone/credential-keys/ string value Directory containing Fernet keys used to encrypt and decrypt credentials stored in the credential backend. Fernet keys used to encrypt credentials have no relationship to Fernet keys used to encrypt Fernet tokens. Both sets of keys should be managed separately and require different rotation policies. Do not share this repository with the repository used to manage keys for Fernet tokens.
provider = fernet string value Entry point for credential encryption and decryption operations in the keystone.credential.provider namespace. Keystone only provides a fernet driver, so there's no reason to change this unless you are providing a custom entry point to encrypt and decrypt credentials.
user_limit = -1 integer value Maximum number of credentials a user is permitted to create. A value of -1 means unlimited. If a limit is not set, users are permitted to create credentials at will, which could lead to bloat in the keystone database or open keystone to a DoS attack.

7.1.9. database
The following table outlines the options available under the [database] group in the /etc/keystone/keystone.conf file.
Table 7.8. database
Configuration option = Default value Type Description
backend = sqlalchemy string value The back end to use for the database.
connection = None string value The SQLAlchemy connection string to use to connect to the database.
connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything.
connection_parameters = '' string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&...
connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.
connection_trace = False boolean value Add Python stack traces to SQL as comment strings.
db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval.
db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before an error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
db_retry_interval = 1 integer value Seconds between retries of a database transaction.
max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy.
max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.
max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB).
mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode.
To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy.
retry_interval = 10 integer value Interval between retries of opening a SQL connection.
slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database.
sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode.
use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost.
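As a worked example of the core [database] options, the following is a hedged sketch of a MySQL-backed configuration; the driver string, hostname, and credentials are placeholders for your deployment, not values prescribed by this guide:

[database]
# SQLAlchemy URL for the keystone schema; adjust driver, host, and
# credentials as appropriate.
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@db.example.com/keystone
max_pool_size = 5
# Recycle pooled connections after an hour to avoid server-side timeouts.
connection_recycle_time = 3600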
7.1.10. domain_config
The following table outlines the options available under the [domain_config] group in the /etc/keystone/keystone.conf file.
Table 7.9. domain_config
Configuration option = Default value Type Description
cache_time = 300 integer value Time-to-live (TTL, in seconds) to cache domain-specific configuration data. This has no effect unless [domain_config] caching is enabled.
caching = True boolean value Toggle for caching of the domain-specific configuration backend. This has no effect unless global caching is enabled. There is normally no reason to disable this.
driver = sql string value Entry point for the domain-specific configuration driver in the keystone.resource.domain_config namespace. Only a sql option is provided by keystone, so there is no reason to set this unless you are providing a custom entry point.

7.1.11. endpoint_filter
The following table outlines the options available under the [endpoint_filter] group in the /etc/keystone/keystone.conf file.
Table 7.10. endpoint_filter
Configuration option = Default value Type Description
driver = sql string value Entry point for the endpoint filter driver in the keystone.endpoint_filter namespace. Only a sql option is provided by keystone, so there is no reason to set this unless you are providing a custom entry point.
return_all_endpoints_if_no_filter = True boolean value This controls keystone's behavior if the configured endpoint filters do not result in any endpoints for a user + project pair (and therefore a potentially empty service catalog). If set to true, keystone will return the entire service catalog. If set to false, keystone will return an empty service catalog.

7.1.12. endpoint_policy
The following table outlines the options available under the [endpoint_policy] group in the /etc/keystone/keystone.conf file.
Table 7.11. endpoint_policy
Configuration option = Default value Type Description
driver = sql string value Entry point for the endpoint policy driver in the keystone.endpoint_policy namespace. Only a sql driver is provided by keystone, so there is no reason to set this unless you are providing a custom entry point.

7.1.13. eventlet_server
The following table outlines the options available under the [eventlet_server] group in the /etc/keystone/keystone.conf file.
Table 7.12. eventlet_server
Configuration option = Default value Type Description
admin_bind_host = 0.0.0.0 host address value The IP address of the network interface for the admin service to listen on. Deprecated since: K. Reason: Support for running keystone under eventlet has been removed in the Newton release. These options remain for backwards compatibility because they are used for URL substitutions.
admin_port = 35357 port value The port number for the admin service to listen on. Deprecated since: K. Reason: Support for running keystone under eventlet has been removed in the Newton release. These options remain for backwards compatibility because they are used for URL substitutions.
public_bind_host = 0.0.0.0 host address value The IP address of the network interface for the public service to listen on. Deprecated since: K. Reason: Support for running keystone under eventlet has been removed in the Newton release. These options remain for backwards compatibility because they are used for URL substitutions.
public_port = 5000 port value The port number for the public service to listen on. Deprecated since: K. Reason: Support for running keystone under eventlet has been removed in the Newton release. These options remain for backwards compatibility because they are used for URL substitutions.

7.1.14. federation
The following table outlines the options available under the [federation] group in the /etc/keystone/keystone.conf file.
Table 7.13. federation
Configuration option = Default value Type Description
assertion_prefix = '' string value Prefix to use when filtering environment variable names for federated assertions. Matched variables are passed into the federated mapping engine.
caching = True boolean value Toggle for federation caching. This has no effect unless global caching is enabled. There is typically no reason to disable this.
default_authorization_ttl = 0 integer value Default time in minutes for the validity of group memberships carried over from a mapping. Default is 0, which means disabled.
driver = sql string value Entry point for the federation backend driver in the keystone.federation namespace. Keystone only provides a sql driver, so there is no reason to set this option unless you are providing a custom entry point.
federated_domain_name = Federated string value An arbitrary domain name that is reserved to allow federated ephemeral users to have a domain concept. Note that an admin will not be able to create a domain with this name or update an existing domain to this name. You are not advised to change this value unless you really have to. Deprecated since: T. Reason: This option has been superseded by ephemeral users existing in the domain of their identity provider.
remote_id_attribute = None string value Default value for all protocols to be used to obtain the entity ID of the Identity Provider from the environment. For mod_shib, this would be Shib-Identity-Provider. For mod_auth_openidc, this could be HTTP_OIDC_ISS. For mod_auth_mellon, this could be MELLON_IDP. This can be overridden on a per-protocol basis by providing a remote_id_attribute to the federation protocol using the API.
sso_callback_template = /etc/keystone/sso_callback_template.html string value Absolute path to an HTML file used as a Single Sign-On callback handler. This page is expected to redirect the user from keystone back to a trusted dashboard host, by form encoding a token in a POST request. Keystone's default value should be sufficient for most deployments.
trusted_dashboard = [] multi valued A list of trusted dashboard hosts. Before accepting a Single Sign-On request to return a token, the origin host must be a member of this list. This configuration option may be repeated for multiple values. You must set this in order to use web-based SSO flows. For example: trusted_dashboard=https://acme.example.com/auth/websso trusted_dashboard=https://beta.example.com/auth/websso
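The trusted_dashboard entry above already shows the repeated-option pattern; combining it with remote_id_attribute gives a small, hedged sketch of a web-SSO-ready [federation] block. The hostnames follow the examples in the table, and the Shibboleth attribute is one of the documented possibilities, not a universal requirement:

[federation]
# One line per trusted dashboard host.
trusted_dashboard = https://acme.example.com/auth/websso
trusted_dashboard = https://beta.example.com/auth/websso
# For mod_shib deployments; mod_auth_openidc would use HTTP_OIDC_ISS instead.
remote_id_attribute = Shib-Identity-Provider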
7.1.15. fernet_receipts
The following table outlines the options available under the [fernet_receipts] group in the /etc/keystone/keystone.conf file.
Table 7.14. fernet_receipts
Configuration option = Default value Type Description
key_repository = /etc/keystone/fernet-keys/ string value Directory containing Fernet receipt keys. This directory must exist before using keystone-manage fernet_setup for the first time, must be writable by the user running keystone-manage fernet_setup or keystone-manage fernet_rotate, and of course must be readable by keystone's server process. The repository may contain keys in one of three states: a single staged key (always index 0) used for receipt validation, a single primary key (always the highest index) used for receipt creation and validation, and any number of secondary keys (all other index values) used for receipt validation. With multiple keystone nodes, each node must share the same key repository contents, with the exception of the staged key (index 0). It is safe to run keystone-manage fernet_rotate once on any one node to promote a staged key (index 0) to be the new primary (incremented from the highest index), and produce a new staged key (a new key with index 0); the resulting repository can then be atomically replicated to other nodes without any risk of race conditions (for example, it is safe to run keystone-manage fernet_rotate on host A, wait any amount of time, create a tarball of the directory on host A, unpack it on host B to a temporary location, and atomically move (mv) the directory into place on host B). Running keystone-manage fernet_rotate twice on a key repository without syncing other nodes will result in receipts that cannot be validated by all nodes.
max_active_keys = 3 integer value This controls how many keys are held in rotation by keystone-manage fernet_rotate before they are discarded. The default value of 3 means that keystone will maintain one staged key (always index 0), one primary key (the highest numerical index), and one secondary key (every other index). Increasing this value means that additional secondary keys will be kept in the rotation.

7.1.16. fernet_tokens
The following table outlines the options available under the [fernet_tokens] group in the /etc/keystone/keystone.conf file.
Table 7.15. fernet_tokens
Configuration option = Default value Type Description
key_repository = /etc/keystone/fernet-keys/ string value Directory containing Fernet token keys. This directory must exist before using keystone-manage fernet_setup for the first time, must be writable by the user running keystone-manage fernet_setup or keystone-manage fernet_rotate, and of course must be readable by keystone's server process. The repository may contain keys in one of three states: a single staged key (always index 0) used for token validation, a single primary key (always the highest index) used for token creation and validation, and any number of secondary keys (all other index values) used for token validation. With multiple keystone nodes, each node must share the same key repository contents, with the exception of the staged key (index 0).
It is safe to run keystone-manage fernet_rotate once on any one node to promote a staged key (index 0) to be the new primary (incremented from the highest index), and produce a new staged key (a new key with index 0); the resulting repository can then be atomically replicated to other nodes without any risk of race conditions (for example, it is safe to run keystone-manage fernet_rotate on host A, wait any amount of time, create a tarball of the directory on host A, unpack it on host B to a temporary location, and atomically move (mv) the directory into place on host B). Running keystone-manage fernet_rotate twice on a key repository without syncing other nodes will result in tokens that cannot be validated by all nodes.
max_active_keys = 3 integer value This controls how many keys are held in rotation by keystone-manage fernet_rotate before they are discarded. The default value of 3 means that keystone will maintain one staged key (always index 0), one primary key (the highest numerical index), and one secondary key (every other index). Increasing this value means that additional secondary keys will be kept in the rotation.
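A hedged sketch tying these key-repository options together follows; the assumption that the token provider is fernet goes slightly beyond this table (the [token] provider option is only mentioned later, in the jwt_tokens section), so treat the [token] stanza as illustrative:

[token]
# Assumed here: Fernet is the token provider these keys serve.
provider = fernet

[fernet_tokens]
key_repository = /etc/keystone/fernet-keys/
# One staged + one primary + one secondary key held in rotation.
max_active_keys = 3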
domain_configurations_from_database = False boolean value By default, domain-specific configuration data is read from files in the directory identified by [identity] domain_config_dir . Enabling this configuration option allows you to instead manage domain-specific configurations through the API, which are then persisted in the backend (typically, a SQL database), rather than using configuration files on disk. domain_specific_drivers_enabled = False boolean value A subset (or all) of domains can have their own identity driver, each with their own partial configuration options, stored in either the resource backend or in a file in a domain configuration directory (depending on the setting of [identity] domain_configurations_from_database ). Only values specific to the domain need to be specified in this manner. This feature is disabled by default, but may be enabled by default in a future release; set to true to enable. driver = sql string value Entry point for the identity backend driver in the keystone.identity namespace. Keystone provides a sql and ldap driver. This option is also used as the default driver selection (along with the other configuration variables in this section) in the event that [identity] domain_specific_drivers_enabled is enabled, but no applicable domain-specific configuration is defined for the domain in question. Unless your deployment primarily relies on ldap AND is not using domain-specific configuration, you should typically leave this set to sql . list_limit = None integer value Maximum number of entities that will be returned in an identity collection. max_password_length = 4096 integer value Maximum allowed length for user passwords. Decrease this value to improve performance. Changing this value does not affect existing passwords. This value can also be overridden by certain hashing algorithms' maximum allowed length, which takes precedence over the configured value. The bcrypt max_password_length is 72 bytes. password_hash_algorithm = bcrypt string value The password hashing algorithm to use for passwords stored within keystone. password_hash_rounds = None integer value This option represents a trade off between security and performance. Higher values lead to slower performance, but higher security. Changing this option will only affect newly created passwords as existing password hashes already have a fixed number of rounds applied, so it is safe to tune this option in a running cluster. The default for bcrypt is 12, must be between 4 and 31, inclusive. The default for scrypt is 16, must be within range(1,32) . The default for pbkdf2_sha512 is 60000, must be within range(1,1<<32) . WARNING: If using scrypt, increasing this value increases BOTH time AND memory requirements to hash a password. salt_bytesize = None integer value Number of bytes to use in scrypt and pbkdf2_sha512 hashing salt. Default for scrypt is 16 bytes. Default for pbkdf2_sha512 is 16 bytes. Limited to a maximum of 96 bytes due to the size of the column used to store password hashes. scrypt_block_size = None integer value Optional block size to pass to scrypt hash function (the r parameter). Useful for tuning scrypt to optimal performance for your CPU architecture. This option is only used when the password_hash_algorithm option is set to scrypt . Defaults to 8. scrypt_parallelism = None integer value Optional parallelism to pass to scrypt hash function (the p parameter). This option is only used when the password_hash_algorithm option is set to scrypt . Defaults to 1.
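To illustrate the password hashing options above, a hedged keystone.conf sketch that keeps the default bcrypt algorithm but raises the work factor; the value 14 is an assumption, to be tuned for your hardware:

[identity]
password_hash_algorithm = bcrypt
# Must be between 4 and 31 for bcrypt; higher is slower but more resistant to brute force
password_hash_rounds = 14

Because existing hashes already record the number of rounds used to create them, this option can be tuned on a running cluster without invalidating current passwords. 7.1.19.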
identity_mapping The following table outlines the options available under the [identity_mapping] group in the /etc/keystone/keystone.conf file. Table 7.18. identity_mapping Configuration option = Default value Type Description backward_compatible_ids = True boolean value The format of user and group IDs changed in Juno for backends that do not generate UUIDs (for example, LDAP), with keystone providing a hash mapping to the underlying attribute in LDAP. By default this mapping is disabled, which ensures that existing IDs will not change. Even when the mapping is enabled by using domain-specific drivers ( [identity] domain_specific_drivers_enabled ), any users and groups from the default domain being handled by LDAP will still not be mapped to ensure their IDs remain backward compatible. Setting this value to false will enable the new mapping for all backends, including the default LDAP driver. It is only guaranteed to be safe to enable this option if you do not already have assignments for users and groups from the default LDAP domain, and you consider it to be acceptable for keystone to provide different IDs to clients than it did previously (existing IDs in the API will suddenly change). Typically this means that the only time you can set this value to false is when configuring a fresh installation, although false is the recommended value in that case. driver = sql string value Entry point for the identity mapping backend driver in the keystone.identity.id_mapping namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point. generator = sha256 string value Entry point for the public ID generator for user and group entities in the keystone.identity.id_generator namespace. The Keystone identity mapper only supports generators that produce 64 bytes or less. Keystone only provides a sha256 entry point, so there is no reason to change this value unless you're providing a custom entry point. 7.1.20. jwt_tokens The following table outlines the options available under the [jwt_tokens] group in the /etc/keystone/keystone.conf file. Table 7.19. jwt_tokens Configuration option = Default value Type Description jws_private_key_repository = /etc/keystone/jws-keys/private string value Directory containing private keys for signing JWS tokens. This directory must exist in order for keystone's server process to start. It must also be readable by keystone's server process. It must contain at least one private key that corresponds to a public key in keystone.conf [jwt_tokens] jws_public_key_repository . In the event there are multiple private keys in this directory, keystone will use a key named private.pem to sign tokens. In the future, keystone may support the ability to sign tokens with multiple private keys. For now, only a key named private.pem within this directory is required to issue JWS tokens. This option is only applicable in deployments issuing JWS tokens and setting keystone.conf [token] provider = jws . jws_public_key_repository = /etc/keystone/jws-keys/public string value Directory containing public keys for validating JWS token signatures. This directory must exist in order for keystone's server process to start. It must also be readable by keystone's server process. It must contain at least one public key that corresponds to a private key in keystone.conf [jwt_tokens] jws_private_key_repository . This option is only applicable in deployments issuing JWS tokens and setting keystone.conf [token] provider = jws .
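As a sketch of how the [jwt_tokens] repositories above pair with the jws token provider (the keypair generation step is described under the [token] section later in this chapter; the deployment itself is an assumption):

# Generate an ECDSA (SHA-256) keypair for signing and validating JWS tokens
keystone-manage create_jws_keypair

# keystone.conf
[token]
provider = jws

[jwt_tokens]
jws_private_key_repository = /etc/keystone/jws-keys/private
jws_public_key_repository = /etc/keystone/jws-keys/public

7.1.21.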
ldap The following table outlines the options available under the [ldap] group in the /etc/keystone/keystone.conf file. Table 7.20. ldap Configuration option = Default value Type Description alias_dereferencing = default string value The LDAP dereferencing option to use for queries involving aliases. A value of default falls back to using default dereferencing behavior configured by your ldap.conf . A value of never prevents aliases from being dereferenced at all. A value of searching dereferences aliases only after name resolution. A value of finding dereferences aliases only during name resolution. A value of always dereferences aliases in all cases. auth_pool_connection_lifetime = 60 integer value The maximum end user authentication connection lifetime to the LDAP server in seconds. When this lifetime is exceeded, the connection will be unbound and removed from the connection pool. This option has no effect unless [ldap] use_auth_pool is also enabled. auth_pool_size = 100 integer value The size of the connection pool to use for end user authentication. This option has no effect unless [ldap] use_auth_pool is also enabled. chase_referrals = None boolean value Sets keystone's referral chasing behavior across directory partitions. If left unset, the system's default behavior will be used. connection_timeout = -1 integer value The connection timeout to use with the LDAP server. A value of -1 means that connections will never timeout. debug_level = None integer value Sets the LDAP debugging level for LDAP calls. A value of 0 means that debugging is not enabled. This value is a bitmask, consult your LDAP documentation for possible values. group_ad_nesting = False boolean value If enabled, group queries will use Active Directory specific filters for nested groups. group_additional_attribute_mapping = [] list value A list of LDAP attribute to keystone group attribute pairs used for mapping additional attributes to groups in keystone. The expected format is <ldap_attr>:<group_attr> , where ldap_attr is the attribute in the LDAP object and group_attr is the attribute which should appear in the identity API. group_attribute_ignore = [] list value List of group attributes to ignore on create and update, or whether a specific group attribute should be filtered for list or show group. group_desc_attribute = description string value The LDAP attribute mapped to group descriptions in keystone. group_filter = None string value The LDAP search filter to use for groups. group_id_attribute = cn string value The LDAP attribute mapped to group IDs in keystone. This must NOT be a multivalued attribute. Group IDs are expected to be globally unique across keystone domains and URL-safe. group_member_attribute = member string value The LDAP attribute used to indicate that a user is a member of the group. group_members_are_ids = False boolean value Enable this option if the members of the group object class are keystone user IDs rather than LDAP DNs. This is the case when using posixGroup as the group object class in Open Directory. group_name_attribute = ou string value The LDAP attribute mapped to group names in keystone. Group names are expected to be unique only within a keystone domain and are not expected to be URL-safe. group_objectclass = groupOfNames string value The LDAP object class to use for groups. If setting this option to posixGroup , you may also be interested in enabling the [ldap] group_members_are_ids option. group_tree_dn = None string value The search base to use for groups.
Defaults to ou=UserGroups with the [ldap] suffix appended to it. page_size = 0 integer value Defines the maximum number of results per page that keystone should request from the LDAP server when listing objects. A value of zero ( 0 ) disables paging. password = None string value The password of the administrator bind DN to use when querying the LDAP server, if your LDAP server requires it. pool_connection_lifetime = 600 integer value The maximum connection lifetime to the LDAP server in seconds. When this lifetime is exceeded, the connection will be unbound and removed from the connection pool. This option has no effect unless [ldap] use_pool is also enabled. pool_connection_timeout = -1 integer value The connection timeout to use when pooling LDAP connections. A value of -1 means that connections will never timeout. This option has no effect unless [ldap] use_pool is also enabled. pool_retry_delay = 0.1 floating point value The number of seconds to wait before attempting to reconnect to the LDAP server. This option has no effect unless [ldap] use_pool is also enabled. pool_retry_max = 3 integer value The maximum number of times to attempt reconnecting to the LDAP server before aborting. A value of zero prevents retries. This option has no effect unless [ldap] use_pool is also enabled. pool_size = 10 integer value The size of the LDAP connection pool. This option has no effect unless [ldap] use_pool is also enabled. query_scope = one string value The search scope which defines how deep to search within the search base. A value of one (representing oneLevel or singleLevel ) indicates a search of objects immediately below the base object, but does not include the base object itself. A value of sub (representing subtree or wholeSubtree ) indicates a search of both the base object itself and the entire subtree below it. randomize_urls = False boolean value Randomize the order of URLs in each keystone process. This makes the failure behavior more gradual, since if the first server is down, a process/thread will wait for the specified timeout before attempting a connection to a server further down the list. This defaults to False, for backward compatibility. suffix = cn=example,cn=com string value The default LDAP server suffix to use, if a DN is not defined via either [ldap] user_tree_dn or [ldap] group_tree_dn . tls_cacertdir = None string value An absolute path to a CA certificate directory to use when communicating with LDAP servers. There is no reason to set this option if you've also set [ldap] tls_cacertfile . tls_cacertfile = None string value An absolute path to a CA certificate file to use when communicating with LDAP servers. This option will take precedence over [ldap] tls_cacertdir , so there is no reason to set both. tls_req_cert = demand string value Specifies which checks to perform against client certificates on incoming TLS sessions. If set to demand , then a certificate will always be requested and required from the LDAP server. If set to allow , then a certificate will always be requested but not required from the LDAP server. If set to never , then a certificate will never be requested. url = ldap://localhost string value URL(s) for connecting to the LDAP server. Multiple LDAP URLs may be specified as a comma separated string. The first URL to successfully bind is used for the connection. use_auth_pool = True boolean value Enable LDAP connection pooling for end user authentication. There is typically no reason to disable this.
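Pulling the URL and pooling options together, a hedged sketch that lists two LDAP servers and enables pooling for both queries and end user authentication; the hostnames are assumptions, and use_pool and pool_size are described in the entries that follow:

[ldap]
# The first URL to successfully bind is used; randomize_urls spreads the choice across processes
url = ldap://ldap1.example.com,ldap://ldap2.example.com
randomize_urls = true
use_auth_pool = true
auth_pool_size = 100
use_pool = true
pool_size = 10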
use_pool = True boolean value Enable LDAP connection pooling for queries to the LDAP server. There is typically no reason to disable this. use_tls = False boolean value Enable TLS when communicating with LDAP servers. You should also set the [ldap] tls_cacertfile and [ldap] tls_cacertdir options when using this option. Do not set this option if you are using LDAP over SSL (LDAPS) instead of TLS. user = None string value The user name of the administrator bind DN to use when querying the LDAP server, if your LDAP server requires it. user_additional_attribute_mapping = [] list value A list of LDAP attribute to keystone user attribute pairs used for mapping additional attributes to users in keystone. The expected format is <ldap_attr>:<user_attr> , where ldap_attr is the attribute in the LDAP object and user_attr is the attribute which should appear in the identity API. user_attribute_ignore = ['default_project_id'] list value List of user attributes to ignore on create and update, or whether a specific user attribute should be filtered for list or show user. user_default_project_id_attribute = None string value The LDAP attribute mapped to a user's default_project_id in keystone. This is most commonly used when keystone has write access to LDAP. user_description_attribute = description string value The LDAP attribute mapped to user descriptions in keystone. user_enabled_attribute = enabled string value The LDAP attribute mapped to the user enabled attribute in keystone. If setting this option to userAccountControl , then you may be interested in setting [ldap] user_enabled_mask and [ldap] user_enabled_default as well. user_enabled_default = True string value The default value to enable users. This should match an appropriate integer value if the LDAP server uses non-boolean (bitmask) values to indicate if a user is enabled or disabled. If this is not set to True , then the typical value is 512 . This is typically used when [ldap] user_enabled_attribute = userAccountControl . user_enabled_emulation = False boolean value If enabled, keystone uses an alternative method to determine if a user is enabled or not by checking if they are a member of the group defined by the [ldap] user_enabled_emulation_dn option. Enabling this option causes keystone to ignore the value of [ldap] user_enabled_invert . user_enabled_emulation_dn = None string value DN of the group entry to hold enabled users when using enabled emulation. Setting this option has no effect unless [ldap] user_enabled_emulation is also enabled. user_enabled_emulation_use_group_config = False boolean value Use the [ldap] group_member_attribute and [ldap] group_objectclass settings to determine membership in the emulated enabled group. Enabling this option has no effect unless [ldap] user_enabled_emulation is also enabled. user_enabled_invert = False boolean value Logically negate the boolean value of the enabled attribute obtained from the LDAP server. Some LDAP servers use a boolean lock attribute where "true" means an account is disabled. Setting [ldap] user_enabled_invert = true will allow these lock attributes to be used. This option will have no effect if either the [ldap] user_enabled_mask or [ldap] user_enabled_emulation options are in use. user_enabled_mask = 0 integer value Bitmask integer to select which bit indicates the enabled value if the LDAP server represents "enabled" as a bit on an integer rather than as a discrete boolean. A value of 0 indicates that the mask is not used. If this is not set to 0 the typical value is 2 . 
This is typically used when [ldap] user_enabled_attribute = userAccountControl . Setting this option causes keystone to ignore the value of [ldap] user_enabled_invert . user_filter = None string value The LDAP search filter to use for users. user_id_attribute = cn string value The LDAP attribute mapped to user IDs in keystone. This must NOT be a multivalued attribute. User IDs are expected to be globally unique across keystone domains and URL-safe. user_mail_attribute = mail string value The LDAP attribute mapped to user emails in keystone. user_name_attribute = sn string value The LDAP attribute mapped to user names in keystone. User names are expected to be unique only within a keystone domain and are not expected to be URL-safe. user_objectclass = inetOrgPerson string value The LDAP object class to use for users. user_pass_attribute = userPassword string value The LDAP attribute mapped to user passwords in keystone. user_tree_dn = None string value The search base to use for users. Defaults to ou=Users with the [ldap] suffix appended to it. 7.1.22. memcache The following table outlines the options available under the [memcache] group in the /etc/keystone/keystone.conf file. Table 7.21. memcache Configuration option = Default value Type Description dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. This is used by the key value store system. pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. This is used by the key value store system. pool_maxsize = 10 integer value Max total number of open connections to every memcached server. This is used by the key value store system. pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. This is used by the key value store system. socket_timeout = 3 integer value Timeout in seconds for every call to a server. This is used by the key value store system. Deprecated since: T *Reason:*This option is duplicated with oslo.cache. Configure ``keystone.conf [cache] memcache_socket_timeout`` option to set the socket_timeout of memcached instead. 7.1.23. oauth1 The following table outlines the options available under the [oauth1] group in the /etc/keystone/keystone.conf file. Table 7.22. oauth1 Configuration option = Default value Type Description access_token_duration = 86400 integer value Number of seconds for the OAuth Access Token to remain valid after being created. This is the amount of time the consumer has to interact with the service provider (which is typically keystone). Setting this option to zero means that access tokens will last forever. driver = sql string value Entry point for the OAuth backend driver in the keystone.oauth1 namespace. Typically, there is no reason to set this option unless you are providing a custom entry point. request_token_duration = 28800 integer value Number of seconds for the OAuth Request Token to remain valid after being created. This is the amount of time the user has to authorize the token. Setting this option to zero means that request tokens will last forever. 7.1.24. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/keystone/keystone.conf file. Table 7.23. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. 
Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. 
Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 7.1.25. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/keystone/keystone.conf file. Table 7.24. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 7.1.26. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/keystone/keystone.conf file. Table 7.25. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 7.1.27. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/keystone/keystone.conf file. Table 7.26. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will then be used to loop for a timeout, giving the sender a chance to recover. This flag is deprecated, and it will no longer be possible to deactivate this functionality. enable_cancel_on_failover = False boolean value Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will cancel and notify consumers when a queue is down heartbeat_in_pthread = False boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False, then the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet, then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services.
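As an example of the [oslo_messaging_notifications] options above, a sketch that enables the messagingv2 driver on the default topic; the driver choice is an assumption, and with transport_url unset the notifications reuse the RPC messaging configuration:

[oslo_messaging_notifications]
driver = messagingv2
topics = notifications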
heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold window the heartbeat is checked. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}' rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 7.1.28. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/keystone/keystone.conf file. Table 7.27. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. max_request_body_size = 114688 integer value The maximum body size for each request, in bytes. secure_proxy_ssl_header = X-Forwarded-Proto string value The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
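For deployments that terminate TLS at a proxy in front of keystone, a minimal sketch of the [oslo_middleware] options above; the proxy setup itself is an assumption, and the proxy must be configured to set the X-Forwarded-Proto header:

[oslo_middleware]
enable_proxy_headers_parsing = true
secure_proxy_ssl_header = X-Forwarded-Proto

7.1.29. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/keystone/keystone.conf file.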
Table 7.28. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path to client key file for REST based policy check remote_ssl_verify_server_crt = False boolean value Server identity verification for REST based policy check 7.1.30. policy The following table outlines the options available under the [policy] group in the /etc/keystone/keystone.conf file. Table 7.29. policy Configuration option = Default value Type Description driver = sql string value Entry point for the policy backend driver in the keystone.policy namespace. Supplied drivers are rules (which does not support any CRUD operations for the v3 policy API) and sql . Typically, there is no reason to set this option unless you are providing a custom entry point. list_limit = None integer value Maximum number of entities that will be returned in a policy collection. 7.1.31. profiler The following table outlines the options available under the [profiler] group in the /etc/keystone/keystone.conf file. Table 7.30. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans.
enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filtering of traces that contain an error/exception into a separate place. Default value is set to False. Possible values: True: Enable filtering of traces that contain an error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed for how much time was spent on it. False: Disables SQL requests profiling. The time spent is only shown at a higher level of operations. Single SQL queries cannot be analyzed this way. 7.1.32. receipt The following table outlines the options available under the [receipt] group in the /etc/keystone/keystone.conf file. Table 7.31. receipt Configuration option = Default value Type Description cache_on_issue = True boolean value Enable storing issued receipt data to receipt validation cache so that first receipt validation doesn't actually cause a full validation cycle. This option has no effect unless global caching and receipt caching are enabled. cache_time = 300 integer value The number of seconds to cache receipt creation and validation data. This has no effect unless both global and [receipt] caching are enabled. caching = True boolean value Toggle for caching receipt creation and validation data. This has no effect unless global caching is enabled, or if cache_on_issue is disabled as we only cache receipts on issue.
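Stepping back to the [profiler] options above, a hedged sketch that enables profiling with a Redis collector; the Redis endpoint is taken from the examples listed above, and the key value is an assumption to be replaced with a random string shared across services:

[profiler]
enabled = true
# One of these keys must be supplied in the REST API call headers to trigger profiling
hmac_keys = SECRET_KEY
connection_string = redis://127.0.0.1:6379
trace_sqlalchemy = true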
expiration = 300 integer value The amount of time that a receipt should remain valid (in seconds). This value should always be very short, as it represents how long a user has to reattempt auth with the missing auth methods. provider = fernet string value Entry point for the receipt provider in the keystone.receipt.provider namespace. The receipt provider controls the receipt construction and validation operations. Keystone includes just the fernet receipt provider for now. fernet receipts do not need to be persisted at all, but require that you run keystone-manage fernet_setup (also see the keystone-manage fernet_rotate command). 7.1.33. resource The following table outlines the options available under the [resource] group in the /etc/keystone/keystone.conf file. Table 7.32. resource Configuration option = Default value Type Description admin_project_domain_name = None string value Name of the domain that owns the admin_project_name . If left unset, then there is no admin project. [resource] admin_project_name must also be set to use this option. admin_project_name = None string value This is a special project which represents cloud-level administrator privileges across services. Tokens scoped to this project will contain a true is_admin_project attribute to indicate to policy systems that the role assignments on that specific project should apply equally across every project. If left unset, then there is no admin project, and thus no explicit means of cross-project role assignments. [resource] admin_project_domain_name must also be set to use this option. cache_time = None integer value Time to cache resource data in seconds. This has no effect unless global caching is enabled. caching = True boolean value Toggle for resource caching. This has no effect unless global caching is enabled. domain_name_url_safe = off string value This controls whether the names of domains are restricted from containing URL-reserved characters. If set to new , attempts to create or update a domain with a URL-unsafe name will fail. If set to strict , attempts to scope a token with a URL-unsafe domain name will fail, thereby forcing all domain names to be updated to be URL-safe. driver = sql string value Entry point for the resource driver in the keystone.resource namespace. Only a sql driver is supplied by keystone. Unless you are writing proprietary drivers for keystone, you do not need to set this option. list_limit = None integer value Maximum number of entities that will be returned in a resource collection. project_name_url_safe = off string value This controls whether the names of projects are restricted from containing URL-reserved characters. If set to new , attempts to create or update a project with a URL-unsafe name will fail. If set to strict , attempts to scope a token with a URL-unsafe project name will fail, thereby forcing all project names to be updated to be URL-safe. 7.1.34. revoke The following table outlines the options available under the [revoke] group in the /etc/keystone/keystone.conf file. Table 7.33. revoke Configuration option = Default value Type Description cache_time = 3600 integer value Time to cache the revocation list and the revocation events (in seconds). This has no effect unless global and [revoke] caching are both enabled. caching = True boolean value Toggle for revocation event caching. This has no effect unless global caching is enabled. driver = sql string value Entry point for the token revocation backend driver in the keystone.revoke namespace. 
Keystone only provides a sql driver, so there is no reason to set this option unless you are providing a custom entry point. expiration_buffer = 1800 integer value The number of seconds after a token has expired before a corresponding revocation event may be purged from the backend. 7.1.35. role The following table outlines the options available under the [role] group in the /etc/keystone/keystone.conf file. Table 7.34. role Configuration option = Default value Type Description cache_time = None integer value Time to cache role data, in seconds. This has no effect unless both global caching and [role] caching are enabled. caching = True boolean value Toggle for role caching. This has no effect unless global caching is enabled. In a typical deployment, there is no reason to disable this. driver = None string value Entry point for the role backend driver in the keystone.role namespace. Keystone only provides a sql driver, so there's no reason to change this unless you are providing a custom entry point. list_limit = None integer value Maximum number of entities that will be returned in a role collection. This may be useful to tune if you have a large number of discrete roles in your deployment. 7.1.36. saml The following table outlines the options available under the [saml] group in the /etc/keystone/keystone.conf file. Table 7.35. saml Configuration option = Default value Type Description assertion_expiration_time = 3600 integer value Determines the lifetime for any SAML assertions generated by keystone, using NotOnOrAfter attributes. certfile = /etc/keystone/ssl/certs/signing_cert.pem string value Absolute path to the public certificate file to use for SAML signing. The value cannot contain a comma ( , ). idp_contact_company = Example, Inc. string value This is the company name of the identity provider's contact person. idp_contact_email = [email protected] string value This is the email address of the identity provider's contact person. idp_contact_name = SAML Identity Provider Support string value This is the given name of the identity provider's contact person. idp_contact_surname = Support string value This is the surname of the identity provider's contact person. idp_contact_telephone = +1 800 555 0100 string value This is the telephone number of the identity provider's contact person. idp_contact_type = other string value This is the type of contact that best describes the identity provider's contact person. idp_entity_id = None uri value This is the unique entity identifier of the identity provider (keystone) to use when generating SAML assertions. This value is required to generate identity provider metadata and must be a URI (a URL is recommended). For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/idp . idp_lang = en string value This is the language used by the identity provider's organization. idp_metadata_path = /etc/keystone/saml2_idp_metadata.xml string value Absolute path to the identity provider metadata file. This file should be generated with the keystone-manage saml_idp_metadata command. There is typically no reason to change this value. idp_organization_display_name = OpenStack SAML Identity Provider string value This is the name of the identity provider's organization to be displayed. idp_organization_name = SAML Identity Provider string value This is the name of the identity provider's organization. idp_organization_url = https://example.com/ uri value This is the URL of the identity provider's organization. 
The URL referenced here should be useful to humans. idp_sso_endpoint = None uri value This is the single sign-on (SSO) service location of the identity provider which accepts HTTP POST requests. A value is required to generate identity provider metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/sso . keyfile = /etc/keystone/ssl/private/signing_key.pem string value Absolute path to the private key file to use for SAML signing. The value cannot contain a comma ( , ). relay_state_prefix = ss:mem: string value The prefix of the RelayState SAML attribute to use when generating enhanced client and proxy (ECP) assertions. In a typical deployment, there is no reason to change this value. xmlsec1_binary = xmlsec1 string value Name of, or absolute path to, the binary to be used for XML signing. Although only the XML Security Library ( xmlsec1 ) is supported, it may have a non-standard name or path on your system. If keystone cannot find the binary itself, you may need to install the appropriate package, use this option to specify an absolute path, or adjust keystone's PATH environment variable. 7.1.37. security_compliance The following table outlines the options available under the [security_compliance] group in the /etc/keystone/keystone.conf file. Table 7.36. security_compliance Configuration option = Default value Type Description change_password_upon_first_use = False boolean value Enabling this option requires users to change their password when the user is created, or upon administrative reset. Before accessing any services, affected users will have to change their password. To ignore this requirement for specific users, such as service users, set the options attribute ignore_change_password_upon_first_use to True for the desired user via the update user API. This feature is disabled by default. This feature is only applicable with the sql backend for the [identity] driver . disable_user_account_days_inactive = None integer value The maximum number of days a user can go without authenticating before being considered "inactive" and automatically disabled (locked). This feature is disabled by default; set any value to enable it. This feature depends on the sql backend for the [identity] driver . When a user exceeds this threshold and is considered "inactive", the user's enabled attribute in the HTTP API may not match the value of the user's enabled column in the user table. lockout_duration = 1800 integer value The number of seconds a user account will be locked when the maximum number of failed authentication attempts (as specified by [security_compliance] lockout_failure_attempts ) is exceeded. Setting this option will have no effect unless you also set [security_compliance] lockout_failure_attempts to a non-zero value. This feature depends on the sql backend for the [identity] driver . lockout_failure_attempts = None integer value The maximum number of times that a user can fail to authenticate before the user account is locked for the number of seconds specified by [security_compliance] lockout_duration . This feature is disabled by default. If this feature is enabled and [security_compliance] lockout_duration is not set, then users may be locked out indefinitely until the user is explicitly enabled via the API. This feature depends on the sql backend for the [identity] driver . minimum_password_age = 0 integer value The number of days that a password must be used before the user can change it. 
This prevents users from changing their passwords immediately in order to wipe out their password history and reuse an old password. This feature does not prevent administrators from manually resetting passwords. It is disabled by default and allows for immediate password changes. This feature depends on the sql backend for the [identity] driver . Note: If [security_compliance] password_expires_days is set, then the value for this option should be less than the password_expires_days . password_expires_days = None integer value The number of days for which a password will be considered valid before requiring it to be changed. This feature is disabled by default. If enabled, new password changes will have an expiration date, however existing passwords would not be impacted. This feature depends on the sql backend for the [identity] driver . password_regex = None string value The regular expression used to validate password strength requirements. By default, the regular expression will match any password. The following is an example of a pattern which requires at least 1 letter, 1 digit, and a minimum length of 7 characters: ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$ This feature depends on the sql backend for the [identity] driver . password_regex_description = None string value Describe your password regular expression here in language for humans. If a password fails to match the regular expression, the contents of this configuration variable will be returned to users to explain why their requested password was insufficient. unique_last_password_count = 0 integer value This controls the number of user password iterations to keep in history, in order to enforce that newly created passwords are unique. The total number, which includes the new password, should not be greater than or equal to this value. Setting the value to zero (the default) disables this feature. Thus, to enable this feature, values must be greater than 0. This feature depends on the sql backend for the [identity] driver . 7.1.38. shadow_users The following table outlines the options available under the [shadow_users] group in the /etc/keystone/keystone.conf file. Table 7.37. shadow_users Configuration option = Default value Type Description driver = sql string value Entry point for the shadow users backend driver in the keystone.identity.shadow_users namespace. This driver is used for persisting local user references to externally-managed identities (via federation, LDAP, etc). Keystone only provides a sql driver, so there is no reason to change this option unless you are providing a custom entry point. 7.1.39. token The following table outlines the options available under the [token] group in the /etc/keystone/keystone.conf file. Table 7.38. token Configuration option = Default value Type Description allow_expired_window = 172800 integer value This controls the number of seconds that a token can be retrieved for beyond the built-in expiry time. This allows long running operations to succeed. Defaults to two days. allow_rescope_scoped_token = True boolean value This toggles whether scoped tokens may be re-scoped to a new project or domain, thereby preventing users from exchanging a scoped token (including those with a default project scope) for any other token. This forces users to either authenticate for unscoped tokens (and later exchange that unscoped token for tokens with a more specific scope) or to provide their credentials in every request for a scoped token to avoid re-scoping altogether.
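Stepping back to the [security_compliance] options above, a hedged sketch combining several of the controls; every value here is an illustrative assumption, not a recommendation:

[security_compliance]
lockout_failure_attempts = 6
lockout_duration = 1800
password_expires_days = 90
unique_last_password_count = 5
password_regex = ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$
password_regex_description = Passwords must contain at least 1 letter, 1 digit, and be at least 7 characters long.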
cache_on_issue = True boolean value Enable storing issued token data to token validation cache so that first token validation doesn't actually cause a full validation cycle. This option has no effect unless global caching is enabled and will still cache tokens even if [token] caching = False . Deprecated since: S *Reason:*Keystone already exposes a configuration option for caching tokens. Having a separate configuration option to cache tokens when they are issued is redundant, unnecessarily complicated, and is misleading if token caching is disabled because tokens will still be pre-cached by default when they are issued. The ability to pre-cache tokens when they are issued is going to rely exclusively on the ``keystone.conf [token] caching`` option in the future. cache_time = None integer value The number of seconds to cache token creation and validation data. This has no effect unless both global and [token] caching are enabled. caching = True boolean value Toggle for caching token creation and validation data. This has no effect unless global caching is enabled. expiration = 3600 integer value The amount of time that a token should remain valid (in seconds). Drastically reducing this value may break "long-running" operations that involve multiple services to coordinate together, and will force users to authenticate with keystone more frequently. Drastically increasing this value will increase the number of tokens that will be simultaneously valid. Keystone tokens are also bearer tokens, so a shorter duration will also reduce the potential security impact of a compromised token. provider = fernet string value Entry point for the token provider in the keystone.token.provider namespace. The token provider controls the token construction, validation, and revocation operations. Supported upstream providers are fernet and jws . Neither fernet nor jws tokens require persistence and both require additional setup. If using fernet , you're required to run keystone-manage fernet_setup , which creates symmetric keys used to encrypt tokens. If using jws , you're required to generate an ECDSA keypair using a SHA-256 hash algorithm for signing and validating tokens, which can be done with keystone-manage create_jws_keypair . Note that fernet tokens are encrypted and jws tokens are only signed. Please be sure to consider this if your deployment has security requirements regarding payload contents used to generate token IDs. revoke_by_id = True boolean value This toggles support for revoking individual tokens by the token identifier and thus various token enumeration operations (such as listing all tokens issued to a specific user). These operations are used to determine the list of tokens to consider revoked. Do not disable this option if you're using the kvs [revoke] driver . 7.1.40. tokenless_auth The following table outlines the options available under the [tokenless_auth] group in the /etc/keystone/keystone.conf file. Table 7.39. tokenless_auth Configuration option = Default value Type Description issuer_attribute = SSL_CLIENT_I_DN string value The name of the WSGI environment variable used to pass the issuer of the client certificate to keystone. This attribute is used as an identity provider ID for the X.509 tokenless authorization along with the protocol to look up its corresponding mapping. In a typical deployment, there is no reason to change this value. protocol = x509 string value The federated protocol ID used to represent X.509 tokenless authorization.
This is used in combination with the value of [tokenless_auth] issuer_attribute to find a corresponding federated mapping. In a typical deployment, there is no reason to change this value. trusted_issuer = [] multi valued The list of distinguished names which identify trusted issuers of client certificates allowed to use X.509 tokenless authorization. If this option is absent, no certificates are allowed. The components of a distinguished name (DN) value must be separated by commas and contain no spaces. Furthermore, because an individual DN may contain commas, this configuration option may be repeated multiple times to represent multiple values. For example, keystone.conf would include two consecutive lines in order to trust two different DNs, such as trusted_issuer = CN=john,OU=keystone,O=openstack and trusted_issuer = CN=mary,OU=eng,O=abc (these DNs reappear in the configuration sketch after the [wsgi] section below). 7.1.41. totp The following table outlines the options available under the [totp] group in the /etc/keystone/keystone.conf file. Table 7.40. totp Configuration option = Default value Type Description included_previous_windows = 1 integer value The number of windows to check when processing TOTP passcodes. 7.1.42. trust The following table outlines the options available under the [trust] group in the /etc/keystone/keystone.conf file. Table 7.41. trust Configuration option = Default value Type Description allow_redelegation = False boolean value Allows authorization to be redelegated from one user to another, effectively chaining trusts together. When disabled, the remaining_uses attribute of a trust is constrained to be zero. driver = sql string value Entry point for the trust backend driver in the keystone.trust namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point. max_redelegation_count = 3 integer value Maximum number of times that authorization can be redelegated from one user to another in a chain of trusts. This number may be reduced further for a specific trust. 7.1.43. unified_limit The following table outlines the options available under the [unified_limit] group in the /etc/keystone/keystone.conf file. Table 7.42. unified_limit Configuration option = Default value Type Description cache_time = None integer value Time to cache unified limit data, in seconds. This has no effect unless both global caching and [unified_limit] caching are enabled. caching = True boolean value Toggle for unified limit caching. This has no effect unless global caching is enabled. In a typical deployment, there is no reason to disable this. driver = sql string value Entry point for the unified limit backend driver in the keystone.unified_limit namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point. enforcement_model = flat string value The enforcement model to use when validating limits associated to projects. Enforcement models will behave differently depending on the existing limits, which may result in backwards incompatible changes if a model is switched in a running deployment. list_limit = None integer value Maximum number of entities that will be returned in a unified limit collection. This may be useful to tune if you have a large number of unified limits in your deployment. 7.1.44. wsgi The following table outlines the options available under the [wsgi] group in the /etc/keystone/keystone.conf file. Table 7.43.
wsgi Configuration option = Default value Type Description debug_middleware = False boolean value If set to true, this enables the oslo debug middleware in Keystone. This middleware prints a lot of information about the request and the response. It is useful for getting information about the data on the wire (decoded) and passed to the WSGI application pipeline. This middleware has no effect on the "debug" setting in the [DEFAULT] section of the config file or setting Keystone's log level to "DEBUG"; it is specific to debugging the WSGI data as it enters and leaves Keystone (specific request-related data). This option is used for introspection on the request and response data between the web server (apache, nginx, etc.) and Keystone. This middleware is inserted as the first element in the middleware chain and will show the data closest to the wire. WARNING: NOT INTENDED FOR USE IN PRODUCTION. THIS MIDDLEWARE CAN AND WILL EMIT SENSITIVE/PRIVILEGED DATA.
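As a hedged illustration only, the following keystone.conf fragments combine several of the options described in this chapter. All values are examples rather than recommendations, and the [security_compliance] options additionally depend on the sql backend for the [identity] driver:
[security_compliance]
password_expires_days = 90
unique_last_password_count = 5
password_regex = ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$
password_regex_description = Passwords must be at least 7 characters long and contain at least one letter and one digit.

[token]
provider = fernet
expiration = 3600
caching = true

[tokenless_auth]
trusted_issuer = CN=john,OU=keystone,O=openstack
trusted_issuer = CN=mary,OU=eng,O=abc

[wsgi]
debug_middleware = false
If the fernet provider is kept, keystone-manage fernet_setup must have been run to create the symmetric keys, as noted in the [token] provider description above.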
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuration_reference/keystone
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack Before you install an OpenShift Container Platform cluster that uses single-root I/O virtualization (SR-IOV) or Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you must understand the requirements for each technology and then perform preparatory tasks. 2.1. Requirements for clusters on RHOSP that use either SR-IOV or OVS-DPDK If you use SR-IOV or OVS-DPDK with your deployment, you must meet the following requirements: RHOSP compute nodes must use a flavor that supports huge pages. 2.1.1. Requirements for clusters on RHOSP that use SR-IOV To use single-root I/O virtualization (SR-IOV) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) SR-IOV deployment . OpenShift Container Platform must support the NICs that you use. For a list of supported NICs, see "About Single Root I/O Virtualization (SR-IOV) hardware networks" in the "Hardware networks" subsection of the "Networking" documentation. For each node that will have an attached SR-IOV NIC, your RHOSP cluster must have: One instance from the RHOSP quota One port attached to the machines subnet One port for each SR-IOV Virtual Function A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space (a hedged flavor-creation sketch appears at the end of this chapter) SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 2.1.2. Requirements for clusters on RHOSP that use OVS-DPDK To use Open vSwitch with the Data Plane Development Kit (OVS-DPDK) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) OVS-DPDK deployment by referring to Planning your OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. Configure your RHOSP OVS-DPDK deployment according to Configuring an OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. 2.2. Preparing to install a cluster that uses SR-IOV You must configure RHOSP before you install a cluster that uses SR-IOV on it. When installing a cluster using SR-IOV, you must deploy clusters using cgroup v1. For more information, see Enabling Linux control group version 1 (cgroup v1) . 2.2.1. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks.
Procedure On a command line, create a radio RHOSP network: $ openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: $ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: $ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: $ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 2.3. Preparing to install a cluster that uses OVS-DPDK You must configure RHOSP before you install a cluster that uses OVS-DPDK on it. Complete Creating a flavor and deploying an instance for OVS-DPDK before you install a cluster on RHOSP. After you perform preinstallation tasks, install your cluster by following the most relevant OpenShift Container Platform on RHOSP installation instructions. Then, perform the tasks under "Next steps" on this page. 2.4. Next steps For either type of deployment: Configure the Node Tuning Operator with huge pages support . To complete SR-IOV configuration after you deploy your cluster: Install the SR-IOV Operator . Configure your SR-IOV network device . Create SR-IOV compute machines . Consult the following references after you deploy your cluster to improve its performance: A test pod template for clusters that use OVS-DPDK on OpenStack . A test pod template for clusters that use SR-IOV on OpenStack . A performance profile template for clusters that use OVS-DPDK on OpenStack .
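As a hedged aside to the flavor requirement in section 2.1.1 (at least 16 GB memory, 4 vCPUs, and 25 GB storage), a suitable flavor might be created as follows. The flavor name and the huge-page property value are illustrative assumptions, not values mandated by this document:
$ openstack flavor create --ram 16384 --vcpus 4 --disk 25 --property hw:mem_page_size=large ocp-sriov-worker
The hw:mem_page_size property is included because section 2.1 requires a flavor that supports huge pages; verify the correct page size for your deployment before using it.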
[ "openstack network create radio --provider-physical-network radio --provider-network-type flat --external", "openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external", "openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio", "openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_openstack/installing-openstack-nfv-preparing
function::pid2execname
function::pid2execname Name function::pid2execname - The name of the given process identifier Synopsis Arguments pid process identifier Description Return the name of the given process id.
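A hedged one-liner showing how this function might be invoked from the command line; the PID 1234 is a placeholder, supplied to the script through -x and read back with target():
$ stap -e 'probe begin { println(pid2execname(target())); exit() }' -x 1234
This prints the executable name of process 1234 and exits immediately.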
[ "pid2execname:string(pid:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-pid2execname
Chapter 3. Creating an application using .NET 8.0
Chapter 3. Creating an application using .NET 8.0 Learn how to create a C# hello-world application. Procedure Create a new Console application in a directory called my-app : The output returns: A simple Hello World console application is created from a template. The application is stored in the specified my-app directory. Verification steps Run the project: The output returns:
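The expected output for these steps appears in the listing at the end of this section. As a hedged sketch of what the template generates, my-app/Program.cs typically contains a single top-level statement; the exact contents vary by SDK and template version:
// my-app/Program.cs, as generated by the console template (illustrative)
Console.WriteLine("Hello World!");
Because the template compiles and runs as-is, no edits are required before running the project.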
[ "dotnet new console --output my-app", "The template \"Console Application\" was created successfully. Processing post-creation actions Running 'dotnet restore' on my-app /my-app.csproj Determining projects to restore Restored /home/ username / my-app /my-app.csproj (in 67 ms). Restore succeeded.", "dotnet run --project my-app", "Hello World!" ]
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_9/creating-an-application-using-dotnet_getting-started-with-dotnet-on-rhel-9
Preface
Preface Securing your software supply chain is critical to prevent software vulnerabilities. Red Hat Trusted Application Pipeline embeds security throughout the software development lifecycle (SDLC), enabling teams to innovate confidently while adhering to the highest security standards.
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/about_red_hat_trusted_application_pipeline/pr01
Chapter 3. Monitoring a Ceph storage cluster
Chapter 3. Monitoring a Ceph storage cluster As a storage administrator, you can monitor the overall health of the Red Hat Ceph Storage cluster, along with monitoring the health of the individual components of Ceph. Once you have a running Red Hat Ceph Storage cluster, you might begin monitoring the storage cluster to ensure that the Ceph Monitor and Ceph OSD daemons are running, at a high level. Ceph storage cluster clients connect to a Ceph Monitor and receive the latest version of the storage cluster map before they can read and write data to the Ceph pools within the storage cluster. So the monitor cluster must have agreement on the state of the cluster before Ceph clients can read and write data. Ceph OSDs must peer the placement groups on the primary OSD with the copies of the placement groups on secondary OSDs. If faults arise, peering will reflect something other than the active + clean state. 3.1. High-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of the Ceph daemons to ensure that they are up and running. High-level monitoring also involves checking the storage cluster capacity to ensure that the storage cluster does not exceed its full ratio . The Red Hat Ceph Storage Dashboard is the most common way to conduct high-level monitoring. However, you can also use the command-line interface, the Ceph admin socket or the Ceph API to monitor the storage cluster. 3.1.1. Checking the storage cluster health After you start the Ceph storage cluster, and before you start reading or writing data, check the storage cluster's health first. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example You can check on the health of the Ceph storage cluster with the following command: Example You can check the status of the Ceph storage cluster by running the ceph status command: Example The output provides the following information: Cluster ID Cluster health status The monitor map epoch and the status of the monitor quorum. The OSD map epoch and the status of OSDs. The status of Ceph Managers. The status of Object Gateways. The placement group map version. The number of placement groups and pools. The notional amount of data stored and the number of objects stored. The total amount of data stored. The IO client operations. An update on the upgrade process if the cluster is upgrading. Upon starting the Ceph cluster, you will likely encounter a health warning such as HEALTH_WARN XXX num placement groups stale . Wait a few moments and check it again. When the storage cluster is ready, ceph health should return a message such as HEALTH_OK . At that point, it is okay to begin using the cluster. 3.1.2. Watching storage cluster events You can watch events that are happening with the Ceph storage cluster using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To watch the cluster's ongoing events, run the following command: Example 3.1.3. How Ceph calculates data usage The used value reflects the actual amount of raw storage used. The xxx GB / xxx GB value means the amount available (the lesser of the two numbers) out of the overall storage capacity of the cluster. The notional number reflects the size of the stored data before it is replicated, cloned or snapshotted.
Therefore, the amount of data actually stored typically exceeds the notional amount stored, because Ceph creates replicas of the data and may also use storage capacity for cloning and snapshotting. 3.1.4. Understanding the storage clusters usage stats To check a cluster's data usage and data distribution among pools, use the df option. It is similar to the Linux df command. The SIZE / AVAIL / RAW USED in the ceph df and ceph status command output are different if some OSDs are marked OUT of the cluster compared to when all OSDs are IN . The SIZE / AVAIL / RAW USED is calculated from the sum of SIZE (osd disk size), RAW USE (total used space on disk), and AVAIL of all OSDs which are in the IN state. You can see the total of SIZE / AVAIL / RAW USED for all OSDs in the ceph osd df tree command output. Example The ceph df detail command gives more details about other pool statistics such as quota objects, quota bytes, used compression, and under compression. The RAW STORAGE section of the output provides an overview of the amount of storage the storage cluster manages for data. CLASS: The class of OSD device. SIZE: The amount of storage capacity managed by the storage cluster. In the above example, if the SIZE is 90 GiB, it is the total size without the replication factor, which is three by default. The total available capacity with the replication factor is 90 GiB/3 = 30 GiB. Based on the full ratio, which is 0.85 (85%) by default, the maximum available space is 30 GiB * 0.85 = 25.5 GiB AVAIL: The amount of free space available in the storage cluster. In the above example, if the SIZE is 90 GiB and the USED space is 6 GiB, then the AVAIL space is 84 GiB. The total available space with the replication factor, which is three by default, is 84 GiB/3 = 28 GiB USED: The amount of raw storage consumed by user data. In the above example, 100 MiB is the total space available after considering the replication factor. The actual available size is 33 MiB. RAW USED: The amount of raw storage consumed by user data, internal overhead, or reserved capacity. % RAW USED: The percentage of RAW USED . Use this number in conjunction with the full ratio and near full ratio to ensure that you are not reaching the storage cluster's capacity. The POOLS section of the output provides a list of pools and the notional usage of each pool. The output from this section DOES NOT reflect replicas, clones or snapshots. For example, if you store an object with 1 MB of data, the notional usage will be 1 MB, but the actual usage may be 3 MB or more depending on the number of replicas (for example, size = 3 ), clones, and snapshots. POOL: The name of the pool. ID: The pool ID. STORED: The actual amount of data stored by the user in the pool. This value changes based on the raw usage data based on (k+M)/K values, number of object copies, and the number of objects degraded at the time of pool stats calculation. OBJECTS: The notional number of objects stored per pool. It is STORED size * replication factor. USED: The notional amount of data stored in kilobytes, unless the number appends M for megabytes or G for gigabytes. %USED: The notional percentage of storage used per pool. MAX AVAIL: An estimate of the notional amount of data that can be written to this pool. It is the amount of data that can be used before the first OSD becomes full. It considers the projected distribution of data across disks from the CRUSH map and uses the first OSD to fill up as the target.
In the above example, MAX AVAIL is 153.85 MB without considering the replication factor, which is three by default. See the Red Hat Knowledgebase article titled ceph df MAX AVAIL is incorrect for simple replicated pool to calculate the value of MAX AVAIL . QUOTA OBJECTS: The number of quota objects. QUOTA BYTES: The number of bytes in the quota objects. USED COMPR: The amount of space allocated for compressed data; this includes compressed data, allocation, replication, and erasure coding overhead. UNDER COMPR: The amount of data passed through compression and beneficial enough to be stored in a compressed form. Note The numbers in the POOLS section are notional. They are not inclusive of the number of replicas, snapshots or clones. As a result, the sum of the USED and %USED amounts will not add up to the RAW USED and %RAW USED amounts in the GLOBAL section of the output. Note The MAX AVAIL value is a complicated function of the replication or erasure code used, the CRUSH rule that maps storage to devices, the utilization of those devices, and the configured mon_osd_full_ratio . Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. 3.1.5. Understanding the OSD usage stats Use the ceph osd df command to view OSD utilization stats. Example ID: The name of the OSD. CLASS: The type of devices the OSD uses. WEIGHT: The weight of the OSD in the CRUSH map. REWEIGHT: The default reweight value. SIZE: The overall storage capacity of the OSD. USE: The OSD capacity. DATA: The amount of OSD capacity that is used by user data. OMAP: An estimate of the bluefs storage that is being used to store object map ( omap ) data (key value pairs stored in rocksdb ). META: The bluefs space allocated, or the value set in the bluestore_bluefs_min parameter, whichever is larger, for internal metadata, which is calculated as the total space allocated in bluefs minus the estimated omap data size. AVAIL: The amount of free space available on the OSD. %USE: The notional percentage of storage used by the OSD. VAR: The variation above or below average utilization. PGS: The number of placement groups in the OSD. MIN/MAX VAR: The minimum and maximum variation across all OSDs. Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. See CRUSH Weights in Red Hat Ceph Storage Storage Strategies Guide for details. 3.1.6. Checking the storage cluster status You can check the status of the Red Hat Ceph Storage cluster from the command-line interface. The status sub command or the -s argument will display the current status of the storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To check a storage cluster's status, execute the following: Example Or Example In interactive mode, type ceph and press Enter : Example 3.1.7. Checking the Ceph Monitor status If the storage cluster has multiple Ceph Monitors, which is a requirement for a production Red Hat Ceph Storage cluster, then you can check the Ceph Monitor quorum status after starting the storage cluster, and before doing any reading or writing of data. A quorum must be present when multiple Ceph Monitors are running. Check the Ceph Monitor status periodically to ensure that they are running.
If there is a problem with the Ceph Monitor that prevents agreement on the state of the storage cluster, the fault can prevent Ceph clients from reading and writing data. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To display the Ceph Monitor map, execute the following: Example or Example To check the quorum status for the storage cluster, execute the following: Ceph returns the quorum status. Example 3.1.8. Using the Ceph administration socket Use the administration socket to interact with a given daemon directly by using a UNIX socket file. For example, the socket enables you to: List the Ceph configuration at runtime Set configuration values at runtime directly without relying on Monitors. This is useful when Monitors are down . Dump historic operations Dump the operation priority queue state Dump operations without rebooting Dump performance counters In addition, using the socket is helpful when troubleshooting problems related to Ceph Monitors or OSDs. If the daemon is not running, the following error is returned when attempting to use the administration socket: Important The administration socket is only available while a daemon is running. When you shut down the daemon properly, the administration socket is removed. However, if the daemon terminates unexpectedly, the administration socket might persist. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To use the socket: Syntax Replace MONITOR_ID with the ID of the daemon, and COMMAND with the command to run. Use help to list the available commands for a given daemon. To view the status of a Ceph Monitor: Example Example Alternatively, specify the Ceph daemon by using its socket file: Syntax To view the status of a Ceph OSD named osd.0 on the specific host: Example Note You can use help instead of status for the various options that are available for the specific daemon. To list all socket files for the Ceph processes: Example Additional Resources See the Red Hat Ceph Storage Troubleshooting Guide for more information. 3.1.9. Understanding the Ceph OSD status A Ceph OSD's status is either in the storage cluster, or out of the storage cluster. It is either up and running, or it is down and not running. If a Ceph OSD is up , it can be either in the storage cluster, where data can be read and written, or it is out of the storage cluster. If it was in the storage cluster and recently moved out of the storage cluster, Ceph starts migrating placement groups to other Ceph OSDs. If a Ceph OSD is out of the storage cluster, CRUSH will not assign placement groups to the Ceph OSD. If a Ceph OSD is down , it should also be out . Note If a Ceph OSD is down and in , there is a problem, and the storage cluster will not be in a healthy state. If you execute a command such as ceph health , ceph -s or ceph -w , you might notice that the storage cluster does not always echo back HEALTH OK . Do not panic. With respect to Ceph OSDs, you can expect that the storage cluster will NOT echo HEALTH OK in a few expected circumstances: You have not started the storage cluster yet, and it is not responding. You have just started or restarted the storage cluster, and it is not ready yet, because the placement groups are getting created and the Ceph OSDs are in the process of peering. You just added or removed a Ceph OSD. You just modified the storage cluster map.
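Returning to the administration socket described in section 3.1.8, the following is a hedged example of reading a single runtime value through the socket; the daemon name and option are illustrative placeholders, and the config get subcommand is the one listed in the daemon help output shown later in this chapter:
$ ceph daemon osd.0 config get osd_max_backfills
The daemon prints the value as a small JSON object, which is useful for confirming that a runtime override actually took effect on that specific daemon.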
An important aspect of monitoring Ceph OSDs is to ensure that when the storage cluster is up and running, all Ceph OSDs that are in the storage cluster are up and running, too. To see if all OSDs are running, execute: Example or Example The result should tell you the map epoch, eNNNN , the total number of OSDs, x , how many, y , are up , and how many, z , are in : If the number of Ceph OSDs that are in the storage cluster is more than the number of Ceph OSDs that are up , execute the following command to identify the ceph-osd daemons that are not running: Example Tip The ability to search through a well-designed CRUSH hierarchy can help you troubleshoot the storage cluster by identifying the physical locations faster. If a Ceph OSD is down , connect to the node and start it. You can use Red Hat Storage Console to restart the Ceph OSD daemon, or you can use the command line. Syntax Example Additional Resources See the Red Hat Ceph Storage Dashboard Guide for more details. 3.2. Low-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of a Red Hat Ceph Storage cluster from a low-level perspective. Low-level monitoring typically involves ensuring that Ceph OSDs are peering properly. When peering faults occur, placement groups operate in a degraded state. This degraded state can be the result of many different things, such as hardware failure, a hung or crashed Ceph daemon, network latency, or a complete site outage. 3.2.1. Monitoring Placement Group Sets When CRUSH assigns placement groups to Ceph OSDs, it looks at the number of replicas for the pool and assigns the placement group to Ceph OSDs such that each replica of the placement group gets assigned to a different Ceph OSD. For example, if the pool requires three replicas of a placement group, CRUSH may assign them to osd.1 , osd.2 and osd.3 respectively. CRUSH actually seeks a pseudo-random placement that will take into account failure domains you set in the CRUSH map, so you will rarely see placement groups assigned to nearest neighbor Ceph OSDs in a large cluster. We refer to the set of Ceph OSDs that should contain the replicas of a particular placement group as the Acting Set . In some cases, an OSD in the Acting Set is down or otherwise not able to service requests for objects in the placement group. When these situations arise, do not panic. Common examples include: You added or removed an OSD. Then, CRUSH reassigned the placement group to other Ceph OSDs, thereby changing the composition of the acting set and spawning the migration of data with a "backfill" process. A Ceph OSD was down , was restarted and is now recovering . A Ceph OSD in the acting set is down or unable to service requests, and another Ceph OSD has temporarily assumed its duties. Ceph processes a client request using the Up Set , which is the set of Ceph OSDs that actually handle the requests. In most cases, the up set and the Acting Set are virtually identical. When they are not, it can indicate that Ceph is migrating data, a Ceph OSD is recovering, or that there is a problem, that is, Ceph usually echoes a HEALTH WARN state with a "stuck stale" message in such scenarios. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node.
Procedure Log into the Cephadm shell: Example To retrieve a list of placement groups: Example View which Ceph OSDs are in the Acting Set or in the Up Set for a given placement group: Syntax Example Note If the Up Set and Acting Set do not match, this may be an indicator that the storage cluster is rebalancing itself or that there is a potential problem with the storage cluster. 3.2.2. Ceph OSD peering Before you can write data to a placement group, it must be in an active state, and it should be in a clean state. For Ceph to determine the current state of a placement group, the primary OSD of the placement group (that is, the first OSD in the acting set) peers with the secondary and tertiary OSDs to establish agreement on the current state of the placement group. This assumes a pool with three replicas of the PG. Figure 3.1. Peering 3.2.3. Placement Group States If you execute a command such as ceph health , ceph -s or ceph -w , you may notice that the cluster does not always echo back HEALTH OK . After you check to see if the OSDs are running, you should also check placement group states. You should expect that the cluster will NOT echo HEALTH OK in a number of placement group peering-related circumstances: You have just created a pool and placement groups haven't peered yet. The placement groups are recovering. You have just added an OSD to or removed an OSD from the cluster. You have just modified the CRUSH map and the placement groups are migrating. There is inconsistent data in different replicas of a placement group. Ceph is scrubbing a placement group's replicas. Ceph doesn't have enough storage capacity to complete backfilling operations. If one of the foregoing circumstances causes Ceph to echo HEALTH WARN , don't panic. In many cases, the cluster will recover on its own. In some cases, you may need to take action. An important aspect of monitoring placement groups is to ensure that when the cluster is up and running, all placement groups are active , and preferably in the clean state. To see the status of all placement groups, execute: Example The result should tell you the placement group map version, vNNNNNN , the total number of placement groups, x , and how many placement groups, y , are in a particular state such as active+clean : Note It is common for Ceph to report multiple states for placement groups. Snapshot Trimming PG States When snapshots exist, two additional PG states will be reported. snaptrim : The PGs are currently being trimmed snaptrim_wait : The PGs are waiting to be trimmed Example Output: In addition to the placement group states, Ceph will also echo back the amount of data used, aa , the amount of storage capacity remaining, bb , and the total storage capacity for the placement group. These numbers can be important in a few cases: You are reaching the near full ratio or full ratio . Your data isn't getting distributed across the cluster due to an error in the CRUSH configuration. Placement Group IDs Placement group IDs consist of the pool number, and not the pool name, followed by a period (.) and the placement group ID, a hexadecimal number. You can view pool numbers and their names from the output of ceph osd lspools . The default pool names data , metadata and rbd correspond to pool numbers 0 , 1 and 2 respectively.
A fully qualified placement group ID has the following form: Syntax Example output: To retrieve a list of placement groups: Example To format the output in JSON format and save it to a file: Syntax Example Query a particular placement group: Syntax Example Additional Resources See the chapter Object Storage Daemon (OSD) configuration options in the OSD Object storage daemon configuration options section in Red Hat Ceph Storage Configuration Guide for more details on the snapshot trimming settings. 3.2.4. Placement Group creating state When you create a pool, it will create the number of placement groups you specified. Ceph will echo creating when it is creating one or more placement groups. Once they are created, the OSDs that are part of a placement group's Acting Set will peer. Once peering is complete, the placement group status should be active+clean , which means a Ceph client can begin writing to the placement group. 3.2.5. Placement group peering state When Ceph is Peering a placement group, Ceph is bringing the OSDs that store the replicas of the placement group into agreement about the state of the objects and metadata in the placement group. When Ceph completes peering, this means that the OSDs that store the placement group agree about the current state of the placement group. However, completion of the peering process does NOT mean that each replica has the latest contents. Authoritative History Ceph will NOT acknowledge a write operation to a client until all OSDs of the acting set persist the write operation. This practice ensures that at least one member of the acting set will have a record of every acknowledged write operation since the last successful peering operation. With an accurate record of each acknowledged write operation, Ceph can construct and disseminate a new authoritative history of the placement group: a complete and fully ordered set of operations that, if performed, would bring an OSD's copy of a placement group up to date. 3.2.6. Placement group active state Once Ceph completes the peering process, a placement group may become active . The active state means that the data in the placement group is generally available in the primary placement group and the replicas for read and write operations. 3.2.7. Placement Group clean state When a placement group is in the clean state, the primary OSD and the replica OSDs have successfully peered and there are no stray replicas for the placement group. Ceph replicated all objects in the placement group the correct number of times. 3.2.8. Placement Group degraded state When a client writes an object to the primary OSD, the primary OSD is responsible for writing the replicas to the replica OSDs. After the primary OSD writes the object to storage, the placement group will remain in a degraded state until the primary OSD has received an acknowledgement from the replica OSDs that Ceph created the replica objects successfully. The reason a placement group can be active+degraded is that an OSD may be active even though it doesn't hold all of the objects yet. If an OSD goes down , Ceph marks each placement group assigned to the OSD as degraded . The Ceph OSDs must peer again when the Ceph OSD comes back online. However, a client can still write a new object to a degraded placement group if it is active . If an OSD is down and the degraded condition persists, Ceph may mark the down OSD as out of the cluster and remap the data from the down OSD to another OSD.
The time between being marked down and being marked out is controlled by mon_osd_down_out_interval , which is set to 600 seconds by default. A placement group can also be degraded , because Ceph cannot find one or more objects that Ceph thinks should be in the placement group. While you cannot read or write to unfound objects, you can still access all of the other objects in the degraded placement group. For example, suppose there are nine OSDs in a three-way replica pool. If OSD number 9 goes down, the PGs assigned to OSD 9 go into a degraded state. If OSD 9 does not recover, it goes out of the storage cluster and the storage cluster rebalances. In that scenario, the PGs are degraded and then recover to an active state. 3.2.9. Placement Group recovering state Ceph was designed for fault-tolerance at a scale where hardware and software problems are ongoing. When an OSD goes down , its contents may fall behind the current state of other replicas in the placement groups. When the OSD is back up , the contents of the placement groups must be updated to reflect the current state. During that time period, the OSD may reflect a recovering state. Recovery is not always trivial, because a hardware failure might cause a cascading failure of multiple Ceph OSDs. For example, a network switch for a rack or cabinet may fail, which can cause the OSDs of a number of host machines to fall behind the current state of the storage cluster. Each one of the OSDs must recover once the fault is resolved. Ceph provides a number of settings to balance the resource contention between new service requests and the need to recover data objects and restore the placement groups to the current state. The osd recovery delay start setting allows an OSD to restart, re-peer and even process some replay requests before starting the recovery process. The osd recovery threads setting limits the number of threads for the recovery process, by default one thread. The osd recovery thread timeout sets a thread timeout, because multiple Ceph OSDs can fail, restart and re-peer at staggered rates. The osd recovery max active setting limits the number of recovery requests a Ceph OSD works on simultaneously to prevent the Ceph OSD from failing to serve requests. The osd recovery max chunk setting limits the size of the recovered data chunks to prevent network congestion. 3.2.10. Back fill state When a new Ceph OSD joins the storage cluster, CRUSH will reassign placement groups from OSDs in the cluster to the newly added Ceph OSD. Forcing the new OSD to accept the reassigned placement groups immediately can put excessive load on the new Ceph OSD. Backfilling the OSD with the placement groups allows this process to begin in the background. Once backfilling is complete, the new OSD will begin serving requests when it is ready. During the backfill operations, you might see one of several states: backfill_wait indicates that a backfill operation is pending, but isn't underway yet backfill indicates that a backfill operation is underway backfill_too_full indicates that a backfill operation was requested, but couldn't be completed due to insufficient storage capacity. When a placement group cannot be backfilled, it can be considered incomplete . Ceph provides a number of settings to manage the load spike associated with reassigning placement groups to a Ceph OSD, especially a new Ceph OSD. By default, osd_max_backfills sets the maximum number of concurrent backfills to or from a Ceph OSD to 10.
The osd backfill full ratio enables a Ceph OSD to refuse a backfill request if the OSD is approaching its full ratio, by default 85%. If an OSD refuses a backfill request, the osd backfill retry interval enables an OSD to retry the request, by default after 10 seconds. OSDs can also set osd backfill scan min and osd backfill scan max to manage scan intervals, by default 64 and 512. For some workloads, it is beneficial to avoid regular recovery entirely and use backfill instead. Since backfilling occurs in the background, this allows I/O to proceed on the objects in the OSD. You can force a backfill rather than a recovery by setting the osd_min_pg_log_entries option to 1 , and setting the osd_max_pg_log_entries option to 2 . Contact your Red Hat Support account team for details on when this situation is appropriate for your workload. 3.2.11. Placement Group remapped state When the Acting Set that services a placement group changes, the data migrates from the old acting set to the new acting set. It may take some time for a new primary OSD to service requests. So it may ask the old primary to continue to service requests until the placement group migration is complete. Once data migration completes, the mapping uses the primary OSD of the new acting set. 3.2.12. Placement Group stale state While Ceph uses heartbeats to ensure that hosts and daemons are running, the ceph-osd daemons may also get into a stuck state where they aren't reporting statistics in a timely manner, for example, because of a temporary network fault. By default, OSD daemons report their placement group, up thru, boot and failure statistics every half second, that is, 0.5 , which is more frequent than the heartbeat thresholds. If the Primary OSD of a placement group's acting set fails to report to the monitor or if other OSDs have reported the primary OSD down , the monitors will mark the placement group stale . When you start the storage cluster, it is common to see the stale state until the peering process completes. After the storage cluster has been running for a while, seeing placement groups in the stale state indicates that the primary OSD for those placement groups is down or not reporting placement group statistics to the monitor. 3.2.13. Placement Group misplaced state There are some temporary backfilling scenarios where a PG gets mapped temporarily to an OSD. When that temporary situation no longer applies, the PGs might still reside in the temporary location and not in the proper location, in which case they are said to be misplaced . That is because the correct number of copies actually exists, but one or more copies is in the wrong place. For example, suppose there are 3 OSDs: 0,1,2, and all PGs map to some permutation of those three. If you add another OSD (OSD 3), some PGs will now map to OSD 3 instead of one of the others. However, until OSD 3 is backfilled, the PG will have a temporary mapping allowing it to continue to serve I/O from the old mapping. During that time, the PG is misplaced , because it has a temporary mapping, but not degraded , since there are 3 copies. Example [0,1,2] is a temporary mapping, so the up set is not equal to the acting set and the PG is misplaced but not degraded since [0,1,2] is still three copies. Example OSD 3 is now backfilled and the temporary mapping is removed, not degraded and not misplaced.
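As a hedged sketch of inspecting and adjusting the interval and throttle options discussed in the preceding sections ( mon_osd_down_out_interval , osd_max_backfills , and osd recovery max active ), the values shown are placeholders for illustration, not tuning recommendations:
$ ceph config get mon mon_osd_down_out_interval
$ ceph config set osd osd_max_backfills 1
$ ceph config set osd osd_recovery_max_active 3
Changes made with ceph config set apply cluster-wide for the given daemon type; weigh recovery speed against client I/O impact before tuning these values.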
3.2.14. Placement Group incomplete state A PG goes into an incomplete state when there is incomplete content and peering fails, that is, when there are no complete OSDs which are current enough to perform recovery. Let's say OSD 1, 2, and 3 are the acting OSD set and it switches to OSD 1, 4, and 3, then osd.1 will request a temporary acting set of OSD 1, 2, and 3 while backfilling 4. During this time, if OSD 1, 2, and 3 all go down, osd.4 will be the only one left, which might not have fully backfilled all the data. At this time, the PG will go incomplete , indicating that there are no complete OSDs which are current enough to perform recovery. Alternatively, if osd.4 is not involved and the acting set is simply OSD 1, 2, and 3 when OSD 1, 2, and 3 go down, the PG would likely go stale , indicating that the mons have not heard anything on that PG since the acting set changed. The reason is that there are no OSDs left to notify the new OSDs. 3.2.15. Identifying stuck Placement Groups A placement group is not necessarily problematic just because it is not in an active+clean state. Generally, Ceph's ability to self-repair might not be working when placement groups get stuck. The stuck states include: Unclean : Placement groups contain objects that are not replicated the desired number of times. They should be recovering. Inactive : Placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come back up . Stale : Placement groups are in an unknown state, because the OSDs that host them have not reported to the monitor cluster in a while, and can be configured with the mon osd report timeout setting. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To identify stuck placement groups, execute the following: Syntax Example 3.2.16. Finding an object's location The Ceph client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to an OSD dynamically. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To find the object location, all you need is the object name and the pool name: Syntax Example
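To make the output of that final command easier to read, the following is a hedged sketch of the shape it typically takes; the epoch, pool, object, and OSD numbers are illustrative:
$ ceph osd map mypool myobject
osdmap e537 pool 'mypool' (1) object 'myobject' -> pg 1.d0b4b286 (1.6) -> up ([0,1,2], p0) acting ([0,1,2], p0)
Reading left to right: the object hashes to placement group 1.6, which is currently served by OSDs 0, 1, and 2, with OSD 0 ( p0 ) as the primary in both the up and acting sets.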
[ "root@host01 ~]# cephadm shell", "ceph health HEALTH_OK", "ceph status", "root@host01 ~]# cephadm shell", "ceph -w cluster: id: 8c9b0072-67ca-11eb-af06-001a4a0002a0 health: HEALTH_OK services: mon: 2 daemons, quorum Ceph5-2,Ceph5-adm (age 3d) mgr: Ceph5-1.nqikfh(active, since 3w), standbys: Ceph5-adm.meckej osd: 5 osds: 5 up (since 2d), 5 in (since 8w) rgw: 2 daemons active (test_realm.test_zone.Ceph5-2.bfdwcn, test_realm.test_zone.Ceph5-adm.acndrh) data: pools: 11 pools, 273 pgs objects: 459 objects, 32 KiB usage: 2.6 GiB used, 72 GiB / 75 GiB avail pgs: 273 active+clean io: client: 170 B/s rd, 730 KiB/s wr, 0 op/s rd, 729 op/s wr 2021-06-02 15:45:21.655871 osd.0 [INF] 17.71 deep-scrub ok 2021-06-02 15:45:47.880608 osd.1 [INF] 1.0 scrub ok 2021-06-02 15:45:48.865375 osd.1 [INF] 1.3 scrub ok 2021-06-02 15:45:50.866479 osd.1 [INF] 1.4 scrub ok 2021-06-02 15:45:01.345821 mon.0 [INF] pgmap v41339: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:05.718640 mon.0 [INF] pgmap v41340: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:53.997726 osd.1 [INF] 1.5 scrub ok 2021-06-02 15:45:06.734270 mon.0 [INF] pgmap v41341: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:15.722456 mon.0 [INF] pgmap v41342: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:46:06.836430 osd.0 [INF] 17.75 deep-scrub ok 2021-06-02 15:45:55.720929 mon.0 [INF] pgmap v41343: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail", "ceph df --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 TOTAL 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 --- POOLS --- POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL .mgr 1 1 5.3 MiB 3 16 MiB 0 629 GiB .rgw.root 2 32 1.3 KiB 4 48 KiB 0 629 GiB default.rgw.log 3 32 3.6 KiB 209 408 KiB 0 629 GiB default.rgw.control 4 32 0 B 8 0 B 0 629 GiB default.rgw.meta 5 32 1.7 KiB 10 96 KiB 0 629 GiB default.rgw.buckets.index 7 32 5.5 MiB 22 17 MiB 0 629 GiB default.rgw.buckets.data 8 32 807 KiB 3 2.4 MiB 0 629 GiB default.rgw.buckets.non-ec 9 32 1.0 MiB 1 3.1 MiB 0 629 GiB source-ecpool-86 11 32 1.2 TiB 391.13k 2.1 TiB 53.49 1.1 TiB", "ceph osd df ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS 3 hdd 0.90959 1.00000 931GiB 70.1GiB 69.1GiB 0B 1GiB 861GiB 7.53 2.93 66 4 hdd 0.90959 1.00000 931GiB 1.30GiB 308MiB 0B 1GiB 930GiB 0.14 0.05 59 0 hdd 0.90959 1.00000 931GiB 18.1GiB 17.1GiB 0B 1GiB 913GiB 1.94 0.76 57 MIN/MAX VAR: 0.02/2.98 STDDEV: 2.91", "cephadm shell", "ceph status", "ceph -s", "ceph ceph> status cluster: id: 499829b4-832f-11eb-8d6d-001a4a000635 health: HEALTH_WARN 1 stray daemon(s) not managed by cephadm 1/3 mons down, quorum host03,host02 too many PGs per OSD (261 > max 250) services: mon: 3 daemons, quorum host03,host02 (age 3d), out of quorum: host01 mgr: host01.hdhzwn(active, since 9d), standbys: host05.eobuuv, host06.wquwpj osd: 12 osds: 11 up (since 2w), 11 in (since 5w) rgw: 2 daemons active (test_realm.test_zone.host04.hgbvnq, test_realm.test_zone.host05.yqqilm) rgw-nfs: 1 daemon active (nfs.foo.host06-rgw) data: pools: 8 pools, 960 pgs objects: 414 objects, 1.0 MiB usage: 5.7 GiB used, 214 GiB / 220 GiB avail pgs: 960 active+clean io: client: 41 KiB/s rd, 0 B/s wr, 41 op/s rd, 27 op/s wr ceph> health HEALTH_WARN 1 stray daemon(s) not 
managed by cephadm; 1/3 mons down, quorum host03,host02; too many PGs per OSD (261 > max 250) ceph> mon stat e3: 3 mons at {host01=[v2:10.74.255.0:3300/0,v1:10.74.255.0:6789/0],host02=[v2:10.74.249.253:3300/0,v1:10.74.249.253:6789/0],host03=[v2:10.74.251.164:3300/0,v1:10.74.251.164:6789/0]}, election epoch 6688, leader 1 host03, quorum 1,2 host03,host02", "cephadm shell", "ceph mon stat", "ceph mon dump", "ceph quorum_status -f json-pretty", "{ \"election_epoch\": 6686, \"quorum\": [ 0, 1, 2 ], \"quorum_names\": [ \"host01\", \"host03\", \"host02\" ], \"quorum_leader_name\": \"host01\", \"quorum_age\": 424884, \"features\": { \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, \"monmap\": { \"epoch\": 3, \"fsid\": \"499829b4-832f-11eb-8d6d-001a4a000635\", \"modified\": \"2021-03-15T04:51:38.621737Z\", \"created\": \"2021-03-12T12:35:16.911339Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.255.0:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.255.0:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.255.0:6789/0\", \"public_addr\": \"10.74.255.0:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.251.164:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.251.164:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.251.164:6789/0\", \"public_addr\": \"10.74.251.164:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.253:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.253:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.253:6789/0\", \"public_addr\": \"10.74.249.253:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] } }", "Error 111: Connection Refused", "cephadm shell", "ceph daemon MONITOR_ID COMMAND", "ceph daemon mon.host01 help { \"add_bootstrap_peer_hint\": \"add peer address as potential bootstrap peer for cluster bringup\", \"add_bootstrap_peer_hintv\": \"add peer address vector as potential bootstrap peer for cluster bringup\", \"compact\": \"cause compaction of monitor's leveldb/rocksdb storage\", \"config diff\": \"dump diff of current config and default config\", \"config diff get\": \"dump diff get <field>: dump diff of current and default config setting <field>\", \"config get\": \"config get <field>: get the config value\", \"config help\": \"get config setting schema and descriptions\", \"config set\": \"config set <field> <val> [<val> ...]: set a config variable\", \"config show\": \"dump current config settings\", \"config unset\": \"config unset <field>: unset a config variable\", \"connection scores dump\": \"show the scores used in connectivity-based elections\", \"connection scores reset\": \"reset the scores used in connectivity-based elections\", \"counter dump\": \"dump all labeled and non-labeled counters and their values\", \"counter 
schema\": \"dump all labeled and non-labeled counters schemas\", \"dump_historic_ops\": \"show recent ops\", \"dump_historic_slow_ops\": \"show recent slow ops\", \"dump_mempools\": \"get mempool stats\", \"get_command_descriptions\": \"list available commands\", \"git_version\": \"get git sha1\", \"heap\": \"show heap usage info (available only if compiled with tcmalloc)\", \"help\": \"list available commands\", \"injectargs\": \"inject configuration arguments into running daemon\", \"log dump\": \"dump recent log entries to log file\", \"log flush\": \"flush log entries to log file\", \"log reopen\": \"reopen log file\", \"mon_status\": \"report status of monitors\", \"ops\": \"show the ops currently in flight\", \"perf dump\": \"dump non-labeled counters and their values\", \"perf histogram dump\": \"dump perf histogram values\", \"perf histogram schema\": \"dump perf histogram schema\", \"perf reset\": \"perf reset <name>: perf reset all or one perfcounter name\", \"perf schema\": \"dump non-labeled counters schemas\", \"quorum enter\": \"force monitor back into quorum\", \"quorum exit\": \"force monitor out of the quorum\", \"sessions\": \"list existing sessions\", \"smart\": \"Query health metrics for underlying device\", \"sync_force\": \"force sync of and clear monitor store\", \"version\": \"get ceph version\" }", "ceph daemon mon.host01 mon_status { \"name\": \"host01\", \"rank\": 0, \"state\": \"leader\", \"election_epoch\": 120, \"quorum\": [ 0, 1, 2 ], \"quorum_age\": 206358, \"features\": { \"required_con\": \"2449958747317026820\", \"required_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, \"outside_quorum\": [], \"extra_probe_peers\": [], \"sync_provider\": [], \"monmap\": { \"epoch\": 3, \"fsid\": \"81a4597a-b711-11eb-8cb8-001a4a000740\", \"modified\": \"2021-05-18T05:50:17.782128Z\", \"created\": \"2021-05-17T13:13:13.383313Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.41:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.41:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.41:6789/0\", \"public_addr\": \"10.74.249.41:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.55:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.55:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.55:6789/0\", \"public_addr\": \"10.74.249.55:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.49:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.49:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.49:6789/0\", \"public_addr\": \"10.74.249.49:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] }, \"feature_map\": { 
\"mon\": [ { \"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 1 } ], \"osd\": [ { \"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 3 } ] }, \"stretch_mode\": false }", "ceph daemon /var/run/ceph/ SOCKET_FILE COMMAND", "ceph daemon /var/run/ceph/ceph-osd.0.asok status { \"cluster_fsid\": \"9029b252-1668-11ee-9399-001a4a000429\", \"osd_fsid\": \"1de9b064-b7a5-4c54-9395-02ccda637d21\", \"whoami\": 0, \"state\": \"active\", \"oldest_map\": 1, \"newest_map\": 58, \"num_pgs\": 33 }", "ls /var/run/ceph", "ceph osd stat", "ceph osd dump", "eNNNN: x osds: y up, z in", "ceph osd tree id weight type name up/down reweight -1 3 pool default -3 3 rack mainrack -2 3 host osd-host 0 1 osd.0 up 1 1 1 osd.1 up 1 2 1 osd.2 up 1", "systemctl start CEPH_OSD_SERVICE_ID", "systemctl start [email protected]", "cephadm shell", "ceph pg dump", "ceph pg map PG_NUM", "ceph pg map 128", "ceph pg stat", "vNNNNNN: x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail", "244 active+clean+snaptrim_wait 32 active+clean+snaptrim", "POOL_NUM . PG_ID", "0.1f", "ceph pg dump", "ceph pg dump -o FILE_NAME --format=json", "ceph pg dump -o test --format=json", "ceph pg POOL_NUM . PG_ID query", "ceph pg 5.fe query { \"snap_trimq\": \"[]\", \"snap_trimq_len\": 0, \"state\": \"active+clean\", \"epoch\": 2449, \"up\": [ 3, 8, 10 ], \"acting\": [ 3, 8, 10 ], \"acting_recovery_backfill\": [ \"3\", \"8\", \"10\" ], \"info\": { \"pgid\": \"5.ff\", \"last_update\": \"0'0\", \"last_complete\": \"0'0\", \"log_tail\": \"0'0\", \"last_user_version\": 0, \"last_backfill\": \"MAX\", \"purged_snaps\": [], \"history\": { \"epoch_created\": 114, \"epoch_pool_created\": 82, \"last_epoch_started\": 2402, \"last_interval_started\": 2401, \"last_epoch_clean\": 2402, \"last_interval_clean\": 2401, \"last_epoch_split\": 114, \"last_epoch_marked_full\": 0, \"same_up_since\": 2401, \"same_interval_since\": 2401, \"same_primary_since\": 2086, \"last_scrub\": \"0'0\", \"last_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_deep_scrub\": \"0'0\", \"last_deep_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_clean_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"prior_readable_until_ub\": 0 }, \"stats\": { \"version\": \"0'0\", \"reported_seq\": \"2989\", \"reported_epoch\": \"2449\", \"state\": \"active+clean\", \"last_fresh\": \"2021-06-18T05:16:59.401080+0000\", \"last_change\": \"2021-06-17T01:32:03.764162+0000\", \"last_active\": \"2021-06-18T05:16:59.401080+0000\", .", "pg 1.5: up=acting: [0,1,2] ADD_OSD_3 pg 1.5: up: [0,3,1] acting: [0,1,2]", "pg 1.5: up=acting: [0,3,1]", "ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}", "ceph pg dump_stuck stale OK", "ceph osd map POOL_NAME OBJECT_NAME", "ceph osd map mypool myobject" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/administration_guide/monitoring-a-ceph-storage-cluster
Chapter 10. Using Fibre Channel devices
Chapter 10. Using Fibre Channel devices Red Hat Enterprise Linux 8 provides the following native Fibre Channel drivers: lpfc qla2xxx zfcp 10.1. Resizing Fibre Channel logical units As a system administrator, you can resize Fibre Channel logical units. Procedure Determine which devices are paths for a multipath logical unit: Re-scan Fibre Channel logical units on a system that uses multipathing: Additional resources multipath(8) man page on your system 10.2. Determining the link loss behavior of a device using Fibre Channel If a driver implements the Transport dev_loss_tmo callback, access attempts to a device through a link will be blocked when a transport problem is detected. Procedure Determine the state of a remote port: This command returns one of the following outputs: Blocked when the remote port, along with the devices accessed through it, is blocked. Online if the remote port is operating normally. If the problem is not resolved within dev_loss_tmo seconds, the rport and devices will be unblocked, and all I/O running on that device, along with any new I/O sent to that device, will fail. When a link loss exceeds dev_loss_tmo, the scsi_device and sdN devices are removed. Typically, the Fibre Channel class will leave the device as is, that is, /dev/sdx will remain /dev/sdx. This is because the target binding is saved by the Fibre Channel driver, and when the target port returns, the SCSI addresses are recreated faithfully. However, this cannot be guaranteed; the sdx device is restored only if no additional changes are made to the LUN configuration inside the storage box. Additional resources multipath.conf(5) man page on your system Recommended tuning at scsi,multipath and at application layer while configuring Oracle RAC cluster (Red Hat Knowledgebase) 10.3. Fibre Channel configuration files The following is the list of configuration files in the /sys/class/ directory that provide the user-space API to Fibre Channel. The items use the following variables: H Host number B Bus number T Target L Logical unit (LUN) R Remote port number Important If your system is using multipath software, consult your hardware vendor before changing any of the values described in this section. Transport configuration in /sys/class/fc_transport/targetH:B:T/ port_id 24-bit port ID/address node_name 64-bit node name port_name 64-bit port name Remote port configuration in /sys/class/fc_remote_ports/rport-H:B-R/ port_id node_name port_name dev_loss_tmo Controls when the scsi device gets removed from the system. After dev_loss_tmo triggers, the scsi device is removed. In the multipath.conf file, you can set dev_loss_tmo to infinity. In Red Hat Enterprise Linux 8, if you do not set the fast_io_fail_tmo option, dev_loss_tmo is capped to 600 seconds. By default, fast_io_fail_tmo is set to 5 seconds in Red Hat Enterprise Linux 8 if the multipathd service is running; otherwise, it is set to off. fast_io_fail_tmo Specifies the number of seconds to wait before marking a link as "bad". Once a link is marked bad, existing running I/O or any new I/O on its corresponding path fails. If I/O is in a blocked queue, it will not be failed until dev_loss_tmo expires and the queue is unblocked. If fast_io_fail_tmo is set to any value except off, dev_loss_tmo is uncapped. If fast_io_fail_tmo is set to off, no I/O fails until the device is removed from the system. If fast_io_fail_tmo is set to a number, I/O fails immediately when the fast_io_fail_tmo timeout triggers.
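For example, you can inspect and adjust these remote port attributes through sysfs. This is a minimal sketch, not a tuning recommendation; the rport ID shown is hypothetical, so substitute one that exists on your host:

ls /sys/class/fc_remote_ports/
cat /sys/class/fc_remote_ports/rport-1:0-0/dev_loss_tmo
cat /sys/class/fc_remote_ports/rport-1:0-0/fast_io_fail_tmo
echo 120 > /sys/class/fc_remote_ports/rport-1:0-0/dev_loss_tmo

Values written this way apply only to the running system; for multipathed devices, persistent settings belong in the multipath.conf file, as described above.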
Host configuration in /sys/class/fc_host/hostH/ port_id node_name port_name issue_lip Instructs the driver to rediscover remote ports. 10.4. DM Multipath overrides of the device timeout The recovery_tmo sysfs option controls the timeout for a particular iSCSI device. The following options globally override the recovery_tmo values: The replacement_timeout configuration option globally overrides the recovery_tmo value for all iSCSI devices. For all iSCSI devices that are managed by DM Multipath, the fast_io_fail_tmo option in DM Multipath globally overrides the recovery_tmo value. The fast_io_fail_tmo option in DM Multipath also overrides the fast_io_fail_tmo option in Fibre Channel devices. The DM Multipath fast_io_fail_tmo option takes precedence over replacement_timeout. Red Hat does not recommend using replacement_timeout to override recovery_tmo in devices managed by DM Multipath, because DM Multipath always resets recovery_tmo when the multipathd service reloads.
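As an illustration of this precedence, the following multipath.conf excerpt sets both timeouts in the defaults section. The values are examples assumed for this sketch, not recommended settings; on devices managed by DM Multipath they override the per-port sysfs values and replacement_timeout:

defaults {
    fast_io_fail_tmo 5
    dev_loss_tmo 600
}

After editing the file, reload the configuration, for example with systemctl reload multipathd.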
[ "multipath -ll", "echo 1 > /sys/block/ sdX /device/rescan", "cat /sys/class/fc_remote_ports/rport- host : bus : remote-port /port_state" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/using-fibre-channel-devices_managing-storage-devices
Chapter 40. JMS - IBM MQ Kamelet Source
Chapter 40. JMS - IBM MQ Kamelet Source A Kamelet that can read events from an IBM MQ message queue using JMS. 40.1. Configuration Options The following table summarizes the configuration options available for the jms-ibm-mq-source Kamelet: Property Name Description Type Default Example channel * IBM MQ Channel Name of the IBM MQ Channel string destinationName * Destination Name The destination name string password * Password Password to authenticate to IBM MQ server string queueManager * IBM MQ Queue Manager Name of the IBM MQ Queue Manager string serverName * IBM MQ Server name IBM MQ Server name or address string serverPort * IBM MQ Server Port IBM MQ Server port integer 1414 username * Username Username to authenticate to IBM MQ server string clientId IBM MQ Client ID Name of the IBM MQ Client ID string destinationType Destination Type The JMS destination type (queue or topic) string "queue" Note Fields marked with an asterisk (*) are mandatory. 40.2. Dependencies At runtime, the jms-ibm-mq-source Kamelet relies upon the presence of the following dependencies: camel:jms camel:kamelet mvn:com.ibm.mq:com.ibm.mq.allclient:9.2.5.0 40.3. Usage This section describes how you can use the jms-ibm-mq-source. 40.3.1. Knative Source You can use the jms-ibm-mq-source Kamelet as a Knative source by binding it to a Knative object. jms-ibm-mq-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-source properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 40.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 40.3.1.2. Procedure for using the cluster CLI Save the jms-ibm-mq-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f jms-ibm-mq-source-binding.yaml 40.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 40.3.2. Kafka Source You can use the jms-ibm-mq-source Kamelet as a Kafka source by binding it to a Kafka topic. jms-ibm-mq-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-source properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 40.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also, make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to.
40.3.2.2. Procedure for using the cluster CLI Save the jms-ibm-mq-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f jms-ibm-mq-source-binding.yaml 40.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 40.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jms-ibm-mq-source.kamelet.yaml
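After you apply either binding, you can optionally confirm that it was created and check its status. The commands below are an informal sketch that assumes the oc CLI and the binding name used in this chapter:

oc get kameletbinding jms-ibm-mq-source-binding
oc describe kameletbinding jms-ibm-mq-source-binding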
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-source properties: serverName: \"10.103.41.245\" serverPort: \"1414\" destinationType: \"queue\" destinationName: \"DEV.QUEUE.1\" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f jms-ibm-mq-source-binding.yaml", "kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-source properties: serverName: \"10.103.41.245\" serverPort: \"1414\" destinationType: \"queue\" destinationName: \"DEV.QUEUE.1\" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f jms-ibm-mq-source-binding.yaml", "kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/jms-ibm-mq-source
Chapter 6. ResilientStorage
Chapter 6. ResilientStorage The following table lists all the packages in the Resilient Storage add-on. For more information about core packages, see the Scope of Coverage Details document. Package Core Package? License ccs Yes GPLv2 clufter-cli No GPLv2+ clufter-lib-ccs No GPLv2+ clufter-lib-general No GPLv2+ clufter-lib-pcs No GPLv2+ cluster-cim Yes GPLv2 cluster-glue No GPLv2+ and LGPLv2+ cluster-glue-libs No GPLv2+ and LGPLv2+ cluster-glue-libs-devel Yes GPLv2+ and LGPLv2+ cluster-snmp Yes GPLv2 clusterlib No GPLv2+ and LGPLv2+ clusterlib-devel Yes GPLv2+ and LGPLv2+ cman Yes GPLv2+ and LGPLv2+ cmirror Yes GPLv2 corosync No BSD corosynclib No BSD corosynclib-devel Yes BSD ctdb Yes GPLv3+ ctdb-devel Yes GPLv3+ ctdb-tests No GPLv3+ fence-virt No GPLv2+ fence-virtd-checkpoint Yes GPLv2+ foghorn Yes GPLv2+ gfs2-utils Yes GPLv2+ and LGPLv2+ libesmtp-devel Yes LGPLv2+ and GPLv2+ libqb No LGPLv2+ libqb-devel No LGPLv2+ luci Yes GPLv2 and MIT lvm2-cluster Yes GPLv2 modcluster No GPLv2 omping Yes ISC openais No BSD openaislib No BSD openaislib-devel Yes BSD pacemaker Yes GPLv2+ and LGPLv2+ pacemaker-cli No GPLv2+ and LGPLv2+ pacemaker-cluster-libs No GPLv2+ and LGPLv2+ pacemaker-cts No GPLv2+ and LGPLv2+ pacemaker-doc Yes GPLv2+ and LGPLv2+ pacemaker-libs No GPLv2+ and LGPLv2+ pacemaker-libs-devel Yes GPLv2+ and LGPLv2+ pacemaker-remote No GPLv2+ and LGPLv2+ pcs Yes GPLv2 python-clufter No GPLv2+ and GFDL python-repoze-what-plugins-sql No BSD python-repoze-what-quickstart Yes BSD python-repoze-who-friendlyform No BSD python-repoze-who-plugins-sa No BSD python-tw-forms No MIT and LGPLv2+ resource-agents Yes GPLv2+ and LGPLv2+ resource-agents-sap-hana No GPLv2+ rgmanager Yes GPLv2+ and LGPLv2+ ricci No GPLv2 sbd Yes GPLv2+
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/chap-resilientstorage
Chapter 3. CSINode [storage.k8s.io/v1]
Chapter 3. CSINode [storage.k8s.io/v1] Description CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. metadata.name must be the Kubernetes node name. spec object CSINodeSpec holds information about the specification of all CSI drivers installed on a node 3.1.1. .spec Description CSINodeSpec holds information about the specification of all CSI drivers installed on a node Type object Required drivers Property Type Description drivers array drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty. drivers[] object CSINodeDriver holds information about the specification of one CSI driver installed on a node 3.1.2. .spec.drivers Description drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty. Type array 3.1.3. .spec.drivers[] Description CSINodeDriver holds information about the specification of one CSI driver installed on a node Type object Required name nodeID Property Type Description allocatable object VolumeNodeResources is a set of resource limits for scheduling of volumes. name string name represents the name of the CSI driver that this object refers to. This MUST be the same name returned by the CSI GetPluginName() call for that driver. nodeID string nodeID of the node from the driver point of view. This field enables Kubernetes to communicate with storage systems that do not share the same nomenclature for nodes. For example, Kubernetes may refer to a given node as "node1", but the storage system may refer to the same node as "nodeA". When Kubernetes issues a command to the storage system to attach a volume to a specific node, it can use this field to refer to the node name using the ID that the storage system will understand, e.g. "nodeA" instead of "node1". This field is required. topologyKeys array (string) topologyKeys is the list of keys supported by the driver. When a driver is initialized on a cluster, it provides a set of topology keys that it understands (e.g. "company.com/zone", "company.com/region"). When a driver is initialized on a node, it provides the same topology keys along with values. Kubelet will expose these topology keys as labels on its own node object. 
When Kubernetes does topology aware provisioning, it can use this list to determine which labels it should retrieve from the node object and pass back to the driver. It is possible for different nodes to use different topology keys. This can be empty if the driver does not support topology. 3.1.4. .spec.drivers[].allocatable Description VolumeNodeResources is a set of resource limits for scheduling of volumes. Type object Property Type Description count integer count indicates the maximum number of unique volumes managed by the CSI driver that can be used on a node. A volume that is both attached and mounted on a node is considered to be used once, not twice. The same rule applies for a unique volume that is shared among multiple pods on the same node. If this field is not specified, then the supported number of volumes on this node is unbounded. 3.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csinodes DELETE : delete collection of CSINode GET : list or watch objects of kind CSINode POST : create a CSINode /apis/storage.k8s.io/v1/watch/csinodes GET : watch individual changes to a list of CSINode. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/csinodes/{name} DELETE : delete a CSINode GET : read the specified CSINode PATCH : partially update the specified CSINode PUT : replace the specified CSINode /apis/storage.k8s.io/v1/watch/csinodes/{name} GET : watch changes to an object of kind CSINode. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/storage.k8s.io/v1/csinodes Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CSINode Table 3.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything.
gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent.
The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSINode Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call.
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6.
HTTP responses HTTP code Response body 200 - OK CSINodeList schema 401 - Unauthorized Empty HTTP method POST Description create a CSINode Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body CSINode schema Table 3.9. HTTP responses HTTP code Response body 200 - OK CSINode schema 201 - Created CSINode schema 202 - Accepted CSINode schema 401 - Unauthorized Empty 3.2.2. /apis/storage.k8s.io/v1/watch/csinodes Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true.
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed.
- resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CSINode. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/storage.k8s.io/v1/csinodes/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the CSINode Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CSINode Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Response body 200 - OK CSINode schema 202 - Accepted CSINode schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSINode Table 3.17. HTTP responses HTTP code Response body 200 - OK CSINode schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSINode Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Response body 200 - OK CSINode schema 201 - Created CSINode schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSINode Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body CSINode schema Table 3.23. HTTP responses HTTP code Response body 200 - OK CSINode schema 201 - Created CSINode schema 401 - Unauthorized Empty 3.2.4. /apis/storage.k8s.io/v1/watch/csinodes/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the CSINode Table 3.25.
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CSINode. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
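Because the kubelet populates CSINode objects during plugin registration, the usual interaction with this API is read-only. As an informal illustration (the node and driver names here are hypothetical), you can list the objects and inspect one with the CLI:

oc get csinode
oc get csinode worker-0 -o yaml

The spec of a typical object contains one drivers entry per registered CSI driver, along the lines of this assumed sketch:

apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: worker-0
spec:
  drivers:
  - name: csi.example.com
    nodeID: nodeA
    topologyKeys:
    - company.com/zone
    allocatable:
      count: 24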
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage_apis/csinode-storage-k8s-io-v1
Chapter 1. Preparing to install on IBM Power
Chapter 1. Preparing to install on IBM Power 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Choosing a method to install OpenShift Container Platform on IBM Power You can install a cluster on IBM Power(R) infrastructure that you provision, by using one of the following methods: Installing a cluster on IBM Power(R) : You can install OpenShift Container Platform on IBM Power(R) infrastructure that you provision. Installing a cluster on IBM Power(R) in a restricted network : You can install OpenShift Container Platform on IBM Power(R) infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_power/preparing-to-install-on-ibm-power
Chapter 2. Installing .NET 8.0
Chapter 2. Installing .NET 8.0 .NET 8.0 is included in the AppStream repositories for RHEL 9. The AppStream repositories are enabled by default on RHEL 9 systems. You can install the .NET 8.0 runtime with the latest 8.0 Software Development Kit (SDK). When a newer SDK becomes available for .NET 8.0, you can install it by running sudo yum install dotnet-sdk-8.0. Prerequisites Installed and registered RHEL 9.3 with attached subscriptions. For more information, see Performing a standard RHEL 9 installation. Procedure Install .NET 8.0 and all of its dependencies: Verification steps Verify the installation: The output returns the relevant information about the .NET installation and the environment.
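As an optional smoke test of the SDK (the project name here is arbitrary), you can create and run a minimal console application:

dotnet new console -o hello-dotnet
dotnet run --project hello-dotnet

The default template prints "Hello, World!" when the runtime and SDK are installed correctly.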
[ "sudo yum install dotnet-sdk-8.0 -y", "dotnet --info" ]
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_9/installing-dotnet_getting-started-with-dotnet-on-rhel-9
Chapter 11. Red Hat Process Automation Manager roles and users
Chapter 11. Red Hat Process Automation Manager roles and users To access Business Central or KIE Server, you must create users and assign them appropriate roles before the servers are started. You can create users and roles when you install Business Central or KIE Server. If both Business Central and KIE Server are running on a single instance, a user who is authenticated for Business Central can also access KIE Server. However, if Business Central and KIE Server are running on different instances, a user who is authenticated for Business Central must be authenticated separately to access KIE Server. For example, if a user who is authenticated on Business Central but not authenticated on KIE Server tries to view or manage process definitions in Business Central, a 401 error is logged in the log file and the "Invalid credentials to load data from remote server. Contact your system administrator." message appears in Business Central. This section describes Red Hat Process Automation Manager user roles. Note The admin, analyst, developer, manager, process-admin, user, and rest-all roles are reserved for Business Central. The kie-server role is reserved for KIE Server. For this reason, the available roles can differ depending on whether Business Central, KIE Server, or both are installed. admin : Users with the admin role are the Business Central administrators. They can manage users and create, clone, and manage repositories. They have full access to make required changes in the application. Users with the admin role have access to all areas within Red Hat Process Automation Manager. analyst : Users with the analyst role have access to all high-level features. They can model and execute their projects. However, these users cannot add contributors to spaces or delete spaces in the Design Projects view. Access to the Deploy Execution Servers view, which is intended for administrators, is not available to users with the analyst role. However, the Deploy button is available to these users when they access the Library perspective. developer : Users with the developer role have access to almost all features and can manage rules, models, process flows, forms, and dashboards. They can manage the asset repository, and they can create, build, and deploy projects. Only certain administrative functions such as creating and cloning a new repository are hidden from users with the developer role. manager : Users with the manager role can view reports. These users are usually interested in statistics about the business processes and their performance, business indicators, and other business-related reporting. A user with this role has access only to process and task reports. process-admin : Users with the process-admin role are business process administrators. They have full access to business processes, business tasks, and execution errors. These users can also view business reports and have access to the Task Inbox list. user : Users with the user role can work on the Task Inbox list, which contains business tasks that are part of currently running processes. Users with this role can view process and task reports and manage processes. rest-all : Users with the rest-all role can access Business Central REST capabilities. kie-server : Users with the kie-server role can access KIE Server REST capabilities. This role is mandatory for users to have access to Manage and Track views in Business Central.
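For example, on Red Hat JBoss EAP you can create an application user and assign several of these roles with the add-user script. The username, password, and role list below are placeholders, and the flags shown (-a for the application realm, -u, -p, and -g for a comma-separated list of groups, which map to roles) should be verified against your EAP version:

EAP_HOME/bin/add-user.sh -a -u 'baAdmin' -p 'password1!' -g 'admin,rest-all,kie-server'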
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/roles-users-con_install-on-eap
Chapter 4. Kamelets reference
Chapter 4. Kamelets reference 4.1. Kamelet structure A Kamelet is typically coded in the YAML domain-specific language. The file name prefix is the name of the Kamelet. For example, a Kamelet with the name FTP sink has the filename ftp-sink.kamelet.yaml . Note that in OpenShift, a Kamelet is a resource that shows the name of the Kamelet (not the filename). At a high level, a Kamelet resource describes: A metadata section containing the ID of the Kamelet and other information, such as the type of Kamelet ( source , sink , or action ). A definition (JSON-schema specification) that contains a set of parameters that you can use to configure the Kamelet. An optional types section containing information about input and output expected by the Kamelet. A Camel template in YAML DSL that defines the implementation of the Kamelet. The following diagram shows an example of a Kamelet and its parts. Example Kamelet structure The Kamelet ID - Use this ID in Camel K integrations when you want to reference the Kamelet. Annotations, such as icon, provide display features for the Kamelet. Labels allow a user to query Kamelets (for example, by kind: "source", "sink", or "action") Description of the Kamelet and parameters in JSON-schema specification format. The media type of the output (can include a schema). The route template that defines the behavior of the Kamelet. 4.2. Example source Kamelet Here is the content of the example coffee-source Kamelet:
[ "telegram-text-source.kamelet.yaml apiVersion: camel.apache.org/v1alpha1 kind: Kamelet metadata: name: telegram-source 1 annotations: 2 camel.apache.org/catalog.version: \"master-SNAPSHOT\" camel.apache.org/kamelet.icon: \"data:image/...\" camel.apache.org/provider: \"Red Hat\" camel.apache.org/kamelet.group: \"Telegram\" labels: 3 camel.apache.org/kamelet.type: \"source\" spec: definition: 4 title: \"Telegram Source\" description: |- Receive all messages that people send to your telegram bot. To create a bot, contact the @botfather account using the Telegram app. The source attaches the following headers to the messages: - chat-id / ce-chatid : the ID of the chat where the message comes from required: - authorizationToken type: object properties: authorizationToken: title: Token description: The token to access your bot on Telegram, that you can obtain from the Telegram \"Bot Father\". type: string format: password x-descriptors: - urn:alm:descriptor:com.tectonic.ui:password types: 5 out: mediaType: application/json dependencies: - \"camel:jackson\" - \"camel:kamelet\" - \"camel:telegram\" template: 6 from: uri: telegram:bots parameters: authorizationToken: \"{{authorizationToken}}\" steps: - set-header: name: chat-id simple: \"USD{header[CamelTelegramChatId]}\" - set-header: name: ce-chatid simple: \"USD{header[CamelTelegramChatId]}\" - marshal: json: {} - to: \"kamelet:sink\"", "apiVersion: camel.apache.org/v1alpha1 kind: Kamelet metadata: name: coffee-source labels: camel.apache.org/kamelet.type: \"source\" spec: definition: title: \"Coffee Source\" description: \"Retrieve a random coffee from a catalog of coffees\" properties: period: title: Period description: The interval between two events in seconds type: integer default: 1000 types: out: mediaType: application/json template: from: uri: timer:tick parameters: period: \"{{period}}\" steps: - to: \"https://random-data-api.com/api/coffee/random_coffee\" - to: \"kamelet:sink\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/integrating_applications_with_kamelets/kamelets-reference
8.135. man-pages-overrides
8.135. man-pages-overrides 8.135.1. RHBA-2014:1382 - man-pages-overrides bug fix update An updated man-pages-overrides package that fixes numerous bugs is now available for Red Hat Enterprise Linux 6. The man-pages-overrides package provides a collection of manual (man) pages to complement other packages or update those contained therein. Bug Fixes BZ# 1003511 The "-d", "-G", and "-U" options were removed from the rpc.idmapd command but were still documented in the rpc.idmapd(8) manual page. With this update, these unsupported options have been removed from the rpc.idmapd(8) manual page. BZ# 889049 Previously, an incorrect path to the implicit configuration file was documented in the vhostmd(8) manual page. The vhostmd(8) manual page has been updated to mention the correct path, "/etc/vhostmd/vhostmd.conf". BZ# 1099275 Previously, the mailx(1) manual page contained incomplete information about unsetting environment variables, which could confuse the user. This update adds the complete information to the mailx(1) manual page. BZ# 1087503 Prior to this update, the information in the nl_langinfo(3) and charsets(7) manual pages was incomplete. The nl_langinfo(3) manual page has been updated to state that the code set for the en_US language defaults to Latin1. Additionally, a note has been added to the charsets(7) manual page that the recommended encoding in all settings and locales is UTF-8. BZ# 1078319 Previously, the core(5) manual page contained an incorrect default value for the coredump_filter setting and incomplete descriptions of all bits. The core(5) manual page has been updated to include correct and complete information. BZ# 1112708 The bash(1) manual page did not mention that 512-byte blocks are used for the "-c" and "-f" options in POSIX mode. This update adds the missing piece of information to bash(1). BZ# 1075152 Previously, the xinetd(8) manual page contained incomplete information about what happens to services during the xinetd daemon reload. This update adds a paragraph about termination handling during the xinetd daemon reload to the xinetd(8) manual page. BZ# 1058100 Prior to this update, an incorrect default value for the "persistent" service was documented in the nscd.conf(5) manual page. The nscd.conf(5) manual page has been updated to state that the default value for the "persistent" service is "yes". BZ# 969502 Previously, the rpm(8) manual page did not clearly state that the "--setperms" and "--setugids" options were mutually exclusive. The rpm(8) manual page has been updated to contain complete information. BZ# 1114785 The host.conf(5) resolver library configuration manual page contained an incorrect default value for the "multi" option. The host.conf(5) manual page has been updated to state that the default value for "multi" is "on". BZ# 1066537 Previously, the zsh(1) manual page contained an incomplete description of the emulation mode of the Z shell. This update adds the letter "b" to the list of possible first letters invoking emulation. BZ# 1007865 Previously, the snmp_read(3) manual page was unavailable in Red Hat Enterprise Linux 6. This update adds the missing snmp_read(3) manual page. BZ# 781499 Prior to this update, the description of the "-l" option was missing in the makedeltarpm(8) manual page. This update adds the missing description to makedeltarpm(8).
BZ# 1108028 Previously, the ciphers(1) manual page did not describe the following Elliptic Curve Cryptography (ECC) cipher suite groups: Elliptic Curve Diffie-Hellman (ECDH) and Elliptic Curve Digital Signature Algorithm (ECDSA), or Transport Layer Security (TLS) version 1.2 specific features. This update adds the missing description of the ECDH and ECDSA cipher groups and TLSv1.2 features to ciphers(1), and the documentation is now complete. BZ# 964160 Previously, no manual pages for the cracklib-packer and cracklib-unpacker utilities were available. This update adds the cracklib-format(8) manual page, which describes cracklib-packer and cracklib-unpacker. BZ# 809096 The pkcs_slot utility was removed from the opencryptoki package but the manual page was still available. With this update, the pkcs_slot(1) man page has been removed. BZ# 1058793 Previously, the curl(1) and curl_easy_setopt(3) manual pages contained a link to the complete list of Network Security Services (NSS) ciphers that led to a non-existent page. This update adds the correct link to the curl(1) and curl_easy_setopt(3) manual pages. BZ# 988713 Previously, the "--rsyncable" option was not documented in the gzip(1) manual page. This update adds the description of "--rsyncable" and the documentation of the gzip utility is now complete. BZ# 1059828 Previously, the manual pages for the pthread_mutex utility were not available in Red Hat Enterprise Linux 6. This update adds the pthread_mutex_consistent(3), pthread_mutexattr_getrobust(3), and pthread_mutexattr_setrobust(3) manual pages. BZ# 1058738 The nscd.conf(5) manual page did not include information about netgroup caching. This update adds the description of netgroup caching to nscd.conf(5). BZ# 1075233 Previously, the pcregrep(1) manual page did not mention the pcresyntax(3) manual page. With this update, a note about pcresyntax(3) has been added to the description and the "See Also" section in the pcregrep(1) manual page. BZ# 818780 Previously, a manual page about configuring the oddjobd-mkhomedir utility was unavailable in Red Hat Enterprise Linux 6. This update adds the oddjobd-mkhomedir.conf(5) manual page. BZ# 816252 Prior to this update, a number of manual pages in Russian language were unreadable due to a redundant re-encoding. This bug has been fixed, no re-encoding is performed as the source pages are provided in the UTF-8 encoding, and the manual pages are now correctly readable. BZ# 1058349 The explanation of the sysconf(_SC_GETGR_R_SIZE_MAX) call in the getgrnam(3) manual page has been amended to describe the function of sysconf(_SC_GETGR_R_SIZE_MAX) clearly and correctly. BZ# 1017478 The flock(2) manual page contained insufficient information about locking files over NFS. With this update, a more precise description of this topic has been added to flock(2). BZ# 1011892 Previously, the documentation for the iconv utility was incomplete. This update adds the iconv(1) manual page. BZ# 1057712 When the openssh package was updated, its man pages were overridden by the man-pages-overrides package. This update removes the ssh_config(5) manual page from man-pages-overrides. The fixed manual pages are now part of the openssh package. Users of man-pages-overrides are advised to upgrade to this updated package, which fixes these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/man-pages-overrides
Part VI. Designing a decision service using guided rules
Part VI. Designing a decision service using guided rules As a business analyst or business rules developer, you can define business rules using the guided rules designer in Business Central. These guided rules are compiled into Drools Rule Language (DRL) and form the core of the decision service for your project. Note You can also design your decision service using Decision Model and Notation (DMN) models instead of rule-based or table-based assets. For information about DMN support in Red Hat Process Automation Manager 7.13, see the following resources: Getting started with decision services (step-by-step tutorial with a DMN decision service example) Designing a decision service using DMN models (overview of DMN support and capabilities in Red Hat Process Automation Manager) Prerequisites The space and project for the guided rules have been created in Business Central. Each asset is associated with a project assigned to a space. For details, see Getting started with decision services .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/assembly-guided-rules
1.2. LVM Architecture Overview
1.2. LVM Architecture Overview For the RHEL 4 release of the Linux operating system, the original LVM1 logical volume manager was replaced by LVM2, which has a more generic kernel framework than LVM1. LVM2 provides the following improvements over LVM1: flexible capacity more efficient metadata storage better recovery format new ASCII metadata format atomic changes to metadata redundant copies of metadata LVM2 is backwards compatible with LVM1, with the exception of snapshot and cluster support. You can convert a volume group from LVM1 format to LVM2 format with the vgconvert command. For information on converting LVM metadata format, see the vgconvert (8) man page. The underlying physical storage unit of an LVM logical volume is a block device such as a partition or whole disk. This device is initialized as an LVM physical volume (PV). To create an LVM logical volume, the physical volumes are combined into a volume group (VG). This creates a pool of disk space out of which LVM logical volumes (LVs) can be allocated. This process is analogous to the way in which disks are divided into partitions. A logical volume is used by file systems and applications (such as databases). Figure 1.1, "LVM Logical Volume Components" shows the components of a simple LVM logical volume: Figure 1.1. LVM Logical Volume Components For detailed information on the components of an LVM logical volume, see Chapter 2, LVM Components .
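The PV-to-VG-to-LV flow described above maps directly onto the LVM2 command-line tools. The following sketch uses a hypothetical partition /dev/sdc1 and illustrative volume names; ext3 is shown because it was the common file system for this release.

pvcreate /dev/sdc1                      # initialize the block device as a physical volume
vgcreate vg_data /dev/sdc1              # pool it into a volume group
lvcreate -L 10G -n lv_data vg_data      # allocate a logical volume from the pool
mkfs.ext3 /dev/vg_data/lv_data          # create a file system on the logical volume
mount /dev/vg_data/lv_data /mnt/data    # use it like any other block device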
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/LVM_definition
Chapter 4. Configuring power monitoring
Chapter 4. Configuring power monitoring Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Kepler resource is a Kubernetes custom resource definition (CRD) that enables you to configure the deployment and monitor the status of the Kepler resource. 4.1. The Kepler configuration You can configure Kepler with the spec field of the Kepler resource. Important Ensure that the name of your Kepler instance is kepler . All other instances are rejected by the Power monitoring Operator Webhook. The following is the list of configuration options: Table 4.1. Kepler configuration options Name Spec Description Default port exporter.deployment The port on the node where the Prometheus metrics are exposed. 9103 nodeSelector exporter.deployment The nodes on which Kepler exporter pods are scheduled. kubernetes.io/os: linux tolerations exporter.deployment The tolerations for Kepler exporter that allow the pods to be scheduled on nodes with specific characteristics. - operator: "Exists" Example Kepler resource with default configuration apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: port: 9103 1 nodeSelector: kubernetes.io/os: linux 2 tolerations: 3 - key: "" operator: "Exists" value: "" effect: "" 1 The Prometheus metrics are exposed on port 9103. 2 Kepler pods are scheduled on Linux nodes. 3 The default tolerations allow Kepler to be scheduled on any node. 4.2. Monitoring the Kepler status You can monitor the state of the Kepler exporter with the status field of the Kepler resource. The status.exporter field includes information, such as the following: The number of nodes currently running the Kepler pods The number of nodes that should be running the Kepler pods Conditions representing the health of the Kepler resource This provides you with valuable insights into the changes made through the spec field. Example state of the Kepler resource apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler status: exporter: conditions: 1 - lastTransitionTime: '2024-01-11T11:07:39Z' message: Reconcile succeeded observedGeneration: 1 reason: ReconcileSuccess status: 'True' type: Reconciled - lastTransitionTime: '2024-01-11T11:07:39Z' message: >- Kepler daemonset "kepler-operator/kepler" is deployed to all nodes and available; ready 2/2 observedGeneration: 1 reason: DaemonSetReady status: 'True' type: Available currentNumberScheduled: 2 2 desiredNumberScheduled: 2 3 1 The health of the Kepler resource. In this example, Kepler is successfully reconciled and ready. 2 The number of nodes currently running the Kepler pods is 2. 3 The desired number of nodes to run the Kepler pods is 2. 4.3. Configuring Kepler to use Redfish You can configure Kepler to use Redfish as the power data source for the machines that are running or hosting containers. Kepler can then monitor the power usage of these containers. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role.
You have installed the Power monitoring Operator. Procedure In the Administrator perspective of the web console, click Operators Installed Operators . Click Power monitoring for Red Hat OpenShift from the Installed Operators list and click the Kepler tab. Click Create Kepler . If you already have a Kepler instance created, click Edit Kepler . Configure .spec.exporter.redfish of the Kepler instance by specifying the mandatory secretRef field. You can also configure the optional probeInterval and skipSSLVerify fields to meet your needs. Example Kepler instance apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: # ... redfish: secretRef: <secret_name> required 1 probeInterval: 60s 2 skipSSLVerify: false 3 # ... 1 Required: Specifies the name of the secret that contains the credentials for accessing the Redfish server. 2 Optional: Controls the frequency at which the power information is queried from Redfish. The default value is 60s . 3 Optional: Controls if Kepler skips verifying the Redfish server certificate. The default value is false . Note After Kepler is deployed, the openshift-power-monitoring namespace is created. Create the redfish.csv file with the following data format: <your_kubelet_node_name>,<redfish_username>,<redfish_password>,https://<redfish_ip_or_hostname>/ Example redfish.csv file control-plane,exampleuser,examplepass,https://redfish.nodes.example.com worker-1,exampleuser,examplepass,https://redfish.nodes.example.com worker-2,exampleuser,examplepass,https://another.redfish.nodes.example.com Create the secret under the openshift-power-monitoring namespace. You must create the secret with the following conditions: The secret type is Opaque . The credentials are stored under the redfish.csv key in the data field of the secret. $ oc -n openshift-power-monitoring \ create secret generic redfish-secret \ --from-file=redfish.csv Example output apiVersion: v1 kind: Secret metadata: name: redfish-secret data: redfish.csv: YmFyCg== # ... Important The Kepler deployment will not continue until the Redfish secret is created. You can find this information in the status of a Kepler instance.
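As a hedged convenience, you can read that status directly from the command line; the jsonpath expression below is illustrative and assumes the conditions layout shown in the earlier status example.

# Inspect the full status of the Kepler instance
oc get kepler kepler -o yaml
# Or query only the Available condition (illustrative jsonpath)
oc get kepler kepler -o jsonpath='{.status.exporter.conditions[?(@.type=="Available")].status}'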
[ "apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: port: 9103 1 nodeSelector: kubernetes.io/os: linux 2 Tolerations: 3 - key: \"\" operator: \"Exists\" value: \"\" effect: \"\"", "apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler status: exporter: conditions: 1 - lastTransitionTime: '2024-01-11T11:07:39Z' message: Reconcile succeeded observedGeneration: 1 reason: ReconcileSuccess status: 'True' type: Reconciled - lastTransitionTime: '2024-01-11T11:07:39Z' message: >- Kepler daemonset \"kepler-operator/kepler\" is deployed to all nodes and available; ready 2/2 observedGeneration: 1 reason: DaemonSetReady status: 'True' type: Available currentNumberScheduled: 2 2 desiredNumberScheduled: 2 3", "apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: redfish: secretRef: <secret_name> required 1 probeInterval: 60s 2 skipSSLVerify: false 3", "<your_kubelet_node_name>,<redfish_username>,<redfish_password>,https://<redfish_ip_or_hostname>/", "control-plane,exampleuser,examplepass,https://redfish.nodes.example.com worker-1,exampleuser,examplepass,https://redfish.nodes.example.com worker-2,exampleuser,examplepass,https://another.redfish.nodes.example.com", "oc -n openshift-power-monitoring create secret generic redfish-secret --from-file=redfish.csv", "apiVersion: v1 kind: Secret metadata: name: redfish-secret data: redfish.csv: YmFyCg== #" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/power_monitoring/configuring-power-monitoring
Chapter 5. Decision environments
Chapter 5. Decision environments Decision environments are container images that run Ansible rulebooks. They create a common language for communicating automation dependencies, and give a standard way to build and distribute the automation environment. You can find the default decision environment in the Ansible-Rulebook . To create your own decision environment, see Installing ansible-builder and Building a custom decision environment for Event-Driven Ansible within Ansible Automation Platform . 5.1. Installing ansible-builder To build images, you must have Podman or Docker installed, along with the ansible-builder Python package. The --container-runtime option must correspond to the Podman or Docker executable you intend to use. When building a decision environment image, it must support the architecture that Ansible Automation Platform is deployed with. For more information, see Quickstart for Ansible Builder or Creating and using execution environments . 5.2. Building a custom decision environment for Event-Driven Ansible Decision Environments are execution environments tailored towards running Ansible Rulebooks. Similar to execution environments that run Ansible playbooks for automation controller, decision environments are designed to run rulebooks for Event-Driven Ansible controller. You can create a custom decision environment for Event-Driven Ansible that provides a custom maintained or third-party event source plugin that is not available in the default decision environment. Prerequisites Ansible Automation Platform >= 2.5 Event-Driven Ansible Ansible Builder >= 3.0 Procedure Use de-minimal as the base image with Ansible Builder to build your custom decision environments. This image is built from a base image provided by Red Hat at Ansible Automation Platform minimal decision environment . Important Use the correct Event-Driven Ansible controller decision environment in Ansible Automation Platform to prevent rulebook activation failure. If you want to connect Event-Driven Ansible controller to Ansible Automation Platform 2.4, you must use registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel9:latest If you want to connect Event-Driven Ansible controller to Ansible Automation Platform 2.5, you must use registry.redhat.io/ansible-automation-platform-25/de-minimal-rhel9:latest The following is an example of the Ansible Builder definition file that uses de-minimal as a base image to build a custom decision environment with the ansible.eda collection: version: 3 images: base_image: name: 'registry.redhat.io/ansible-automation-platform-25/de-minimal-rhel9:latest' dependencies: galaxy: collections: - ansible.eda python_interpreter: package_system: "python39" options: package_manager_path: /usr/bin/microdnf Additionally, if you need other Python packages or RPMs, you can add the following to a single definition file: version: 3 images: base_image: name: 'registry.redhat.io/ansible-automation-platform-25/de-minimal-rhel9:latest' dependencies: galaxy: collections: - ansible.eda python: - six - psutil system: - iputils [platform:rpm] python_interpreter: package_system: "python39" options: package_manager_path: /usr/bin/microdnf 5.3. Setting up a new decision environment You can import a decision environment into your Event-Driven Ansible controller using a default or custom decision environment. Prerequisites You have set up a credential, if necessary. For more information, see the Setting up credentials section.
You have pushed a decision environment image to an image repository or you chose to use the de-minimal image located in registry.redhat.io . Procedure Log in to Ansible Automation Platform. Navigate to Automation Decisions Decision Environments . Click Create decision environment . Insert the following: Name Insert the name. Description This field is optional. Organization Select an organization to associate with the decision environment. Image This is the full image location, including the container registry, image name, and version tag. Credential This field is optional. This is the credential needed to use the decision environment image. Select Create decision environment . Your decision environment is now created and can be managed on the Decision Environments page. After saving the new decision environment, the decision environment's details page is displayed. From there or the Decision Environments list view, you can edit or delete it.
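To produce and publish the image referenced in the Image field above, a typical ansible-builder invocation looks like the following sketch; the definition filename, registry, and tag are placeholders, not values from this document.

# Build the decision environment from a definition file like the examples earlier in this chapter
ansible-builder build -f decision-environment.yml \
  -t registry.example.com/my-org/custom-de:latest \
  --container-runtime podman
# Push it to the registry referenced by Event-Driven Ansible controller
podman push registry.example.com/my-org/custom-de:latest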
[ "version: 3 images: base_image: name: 'registry.redhat.io/ansible-automation-platform-25/de-minimal-rhel9:latest' dependencies: galaxy: collections: - ansible.eda python_interpreter: package_system: \"python39\" options: package_manager_path: /usr/bin/microdnf", "version: 3 images: base_image: name: 'registry.redhat.io/ansible-automation-platform-25/de-minimal-rhel9:latest' dependencies: galaxy: collections: - ansible.eda python: - six - psutil system: - iputils [platform:rpm] python_interpreter: package_system: \"python39\" options: package_manager_path: /usr/bin/microdnf" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-decision-environments
probe::udp.disconnect.return
probe::udp.disconnect.return Name probe::udp.disconnect.return - UDP has been disconnected successfully Synopsis Values ret Error code (0: no error) name The name of this probe Context The process which requested a UDP disconnection
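A minimal, hedged usage sketch: the one-liner below prints the probe name and return code whenever a UDP disconnect completes, using only the name and ret values documented above.

stap -e 'probe udp.disconnect.return { printf("%s ret=%d\n", name, ret) }'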
[ "udp.disconnect.return" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-udp-disconnect-return
19.2. XML Representation of a Domain Resource
19.2. XML Representation of a Domain Resource Example 19.1. An XML representation of a domain resource
[ "<domain id=\"77696e32-6b38-7268-6576-2e656e676c61\" href=\"/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61\"> <name>domain.example.com</name> <link rel=\"users\" href=\"/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61/users\"/> <link rel=\"groups\" href=\"/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61/groups\"/> <link rel=\"users/search\" href=\"/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61/ users?search={query}\"/> <link rel=\"groups/search\" href=\"/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61/ groups?search={query}\"/> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_a_domain_resource
Pipelines as Code
Pipelines as Code Red Hat OpenShift Pipelines 1.15 Configuring and using Pipelines as Code Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/pipelines_as_code/index
Using the AMQ Python Client
Using the AMQ Python Client Red Hat AMQ 2021.Q3 For Use with AMQ Clients 2.10
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_python_client/index
12.3. Adding Host Entries
12.3. Adding Host Entries 12.3.1. Adding Host Entries from the Web UI Open the Identity tab, and select the Hosts subtab. Click Add at the top of the hosts list. Figure 12.1. Adding Host Entries Fill in the machine name and select the domain from the configured zones in the drop-down list. If the host has already been assigned a static IP address, then include that with the host entry so that the DNS entry is fully created. Optionally, to add an extra value to the host for some use cases, use the Class field. Semantics placed on this attribute are for local interpretation. Figure 12.2. Add Host Wizard DNS zones can be created in IdM, which is described in Section 33.4.1, "Adding and Removing Master DNS Zones" . If the IdM server does not manage the DNS server, the zone can be entered manually in the menu area, like a regular text field. Note Select the Force check box if you want to skip checking whether the host is resolvable via DNS. Click the Add and Edit button to go directly to the expanded entry page and fill in more attribute information. Information about the host hardware and physical location can be included with the host entry. Figure 12.3. Expanded Entry Page 12.3.2. Adding Host Entries from the Command Line Host entries are created using the host-add command. This command adds the host entry to the IdM Directory Server. The full list of options for host-add is listed in the ipa host man page. At its most basic, an add operation only requires the client host name to add the client to the Kerberos realm and to create an entry in the IdM LDAP server: If the IdM server is configured to manage DNS, then the host can also be added to the DNS resource records using the --ip-address and --force options. Example 12.1. Creating Host Entries with Static IP Addresses Commonly, hosts may not have a static IP address or the IP address may not be known at the time the client is configured. For example, laptops may be preconfigured as Identity Management clients, but they do not have IP addresses at the time they are configured. Hosts which use DHCP can still be configured with a DNS entry by using --force . This essentially creates a placeholder entry in the IdM DNS service. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. Example 12.2. Creating Host Entries with DHCP Host records are deleted using the host-del command. If the IdM domain uses DNS, then the --updatedns option also removes the associated records of any kind for the host from the DNS.
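As a hedged follow-up to the add and delete operations described above, the resulting entry can be verified from the command line; the host name matches the examples in this section.

ipa host-show client1.example.com   # display the new host entry
ipa host-find client1               # or search for it by a partial name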
[ "ipa host-add client1.example.com", "ipa host-add --force --ip-address=192.168.166.31 client1.example.com", "ipa host-add --force client1.example.com", "ipa host-del --updatedns client1.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/adding-host-entry
Chapter 5. Contexts and Dependency Injection (CDI) in Camel Quarkus
Chapter 5. Contexts and Dependency Injection (CDI) in Camel Quarkus CDI plays a central role in Quarkus, and Camel Quarkus offers first-class support for it too. You may use @Inject , @ConfigProperty , and similar annotations, for example to inject beans and configuration values into your Camel RouteBuilder : import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import org.apache.camel.builder.RouteBuilder; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped 1 public class TimerRoute extends RouteBuilder { @ConfigProperty(name = "timer.period", defaultValue = "1000") 2 String period; @Inject Counter counter; @Override public void configure() throws Exception { fromF("timer:foo?period=%s", period) .setBody(exchange -> "Incremented the counter: " + counter.increment()) .to("log:cdi-example?showExchangePattern=false&showBodyType=false"); } } 1 The @ApplicationScoped annotation is required for @Inject and @ConfigProperty to work in a RouteBuilder . Note that the @ApplicationScoped beans are managed by the CDI container and their life cycle is thus a bit more complex than the one of the plain RouteBuilder . In other words, using @ApplicationScoped in RouteBuilder comes with some boot time penalty and you should therefore only annotate your RouteBuilder with @ApplicationScoped when you really need it. 2 The value for the timer.period property is defined in src/main/resources/application.properties of the example project. Tip Refer to the Quarkus Dependency Injection guide for more details. 5.1. Accessing CamelContext To access the CamelContext , just inject it into your bean: import jakarta.inject.Inject; import jakarta.enterprise.context.ApplicationScoped; import java.util.stream.Collectors; import java.util.List; import org.apache.camel.CamelContext; import org.apache.camel.Route; @ApplicationScoped public class MyBean { @Inject CamelContext context; public List<String> listRouteIds() { return context.getRoutes().stream().map(Route::getId).sorted().collect(Collectors.toList()); } } 5.2. @EndpointInject and @Produce If you are used to @org.apache.camel.EndpointInject and @org.apache.camel.Produce from plain Camel or from Camel on SpringBoot, you can continue using them on Quarkus too. The following use cases are supported by org.apache.camel.quarkus:camel-quarkus-core : import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.EndpointInject; import org.apache.camel.FluentProducerTemplate; import org.apache.camel.Produce; import org.apache.camel.ProducerTemplate; @ApplicationScoped class MyBean { @EndpointInject("direct:myDirect1") ProducerTemplate producerTemplate; @EndpointInject("direct:myDirect2") FluentProducerTemplate fluentProducerTemplate; @EndpointInject("direct:myDirect3") DirectEndpoint directEndpoint; @Produce("direct:myDirect4") ProducerTemplate produceProducer; @Produce("direct:myDirect5") FluentProducerTemplate produceProducerFluent; } You can use any other Camel producer endpoint URI instead of direct:myDirect* . Warning @EndpointInject and @Produce are not supported on setter methods - see #2579 The following use case is supported by org.apache.camel.quarkus:camel-quarkus-bean : import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.Produce; @ApplicationScoped class MyProduceBean { public interface ProduceInterface { String sayHello(String name); } @Produce("direct:myDirect6") ProduceInterface produceInterface; void doSomething() { produceInterface.sayHello("Kermit"); } } 5.3. CDI and the Camel Bean component 5.3.1.
Refer to a bean by name To refer to a bean in a route definition by name, just annotate the bean with @Named("myNamedBean") and @ApplicationScoped (or some other supported scope). The @RegisterForReflection annotation is important for the native mode. import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import io.quarkus.runtime.annotations.RegisterForReflection; @ApplicationScoped @Named("myNamedBean") @RegisterForReflection public class NamedBean { public String hello(String name) { return "Hello " + name + " from the NamedBean"; } } Then you can use the myNamedBean name in a route definition: import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from("direct:named") .bean("myNamedBean", "hello"); /* ... which is an equivalent of the following: */ from("direct:named") .to("bean:myNamedBean?method=hello"); } } As an alternative to @Named , you may also use io.smallrye.common.annotation.Identifier to name and identify a bean. import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.runtime.annotations.RegisterForReflection; import io.smallrye.common.annotation.Identifier; @ApplicationScoped @Identifier("myBeanIdentifier") @RegisterForReflection public class MyBean { public String hello(String name) { return "Hello " + name + " from MyBean"; } } Then refer to the identifier value within the Camel route: import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from("direct:start") .bean("myBeanIdentifier", "Camel"); } } Note We aim to support all use cases listed in the Bean binding section of Camel documentation. Do not hesitate to file an issue if some bean binding scenario does not work for you. 5.3.2. @Consume Since Camel Quarkus 2.0.0, the camel-quarkus-bean artifact brings support for @org.apache.camel.Consume - see the Pojo consuming section of Camel documentation. Declaring a class like the following import org.apache.camel.Consume; public class Foo { @Consume("activemq:cheese") public void onCheese(String name) { ... } } will automatically create the following Camel route from("activemq:cheese").bean("foo1234", "onCheese") for you. Note that Camel Quarkus will implicitly add @jakarta.inject.Singleton and @jakarta.inject.Named("foo1234") to the bean class, where 1234 is a hash code obtained from the fully qualified class name. If your bean has some CDI scope (such as @ApplicationScoped ) or @Named("someName") set already, those will be honored in the auto-created route.
[ "import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import org.apache.camel.builder.RouteBuilder; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped 1 public class TimerRoute extends RouteBuilder { @ConfigProperty(name = \"timer.period\", defaultValue = \"1000\") 2 String period; @Inject Counter counter; @Override public void configure() throws Exception { fromF(\"timer:foo?period=%s\", period) .setBody(exchange -> \"Incremented the counter: \" + counter.increment()) .to(\"log:cdi-example?showExchangePattern=false&showBodyType=false\"); } }", "import jakarta.inject.Inject; import jakarta.enterprise.context.ApplicationScoped; import java.util.stream.Collectors; import java.util.List; import org.apache.camel.CamelContext; @ApplicationScoped public class MyBean { @Inject CamelContext context; public List<String> listRouteIds() { return context.getRoutes().stream().map(Route::getId).sorted().collect(Collectors.toList()); } }", "import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.EndpointInject; import org.apache.camel.FluentProducerTemplate; import org.apache.camel.Produce; import org.apache.camel.ProducerTemplate; @ApplicationScoped class MyBean { @EndpointInject(\"direct:myDirect1\") ProducerTemplate producerTemplate; @EndpointInject(\"direct:myDirect2\") FluentProducerTemplate fluentProducerTemplate; @EndpointInject(\"direct:myDirect3\") DirectEndpoint directEndpoint; @Produce(\"direct:myDirect4\") ProducerTemplate produceProducer; @Produce(\"direct:myDirect5\") FluentProducerTemplate produceProducerFluent; }", "import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.Produce; @ApplicationScoped class MyProduceBean { public interface ProduceInterface { String sayHello(String name); } @Produce(\"direct:myDirect6\") ProduceInterface produceInterface; void doSomething() { produceInterface.sayHello(\"Kermit\") } }", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import io.quarkus.runtime.annotations.RegisterForReflection; @ApplicationScoped @Named(\"myNamedBean\") @RegisterForReflection public class NamedBean { public String hello(String name) { return \"Hello \" + name + \" from the NamedBean\"; } }", "import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:named\") .bean(\"myNamedBean\", \"hello\"); /* ... which is an equivalent of the following: */ from(\"direct:named\") .to(\"bean:myNamedBean?method=hello\"); } }", "import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.runtime.annotations.RegisterForReflection; import io.smallrye.common.annotation.Identifier; @ApplicationScoped @Identifier(\"myBeanIdentifier\") @RegisterForReflection public class MyBean { public String hello(String name) { return \"Hello \" + name + \" from MyBean\"; } }", "import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:start\") .bean(\"myBeanIdentifier\", \"Camel\"); } }", "import org.apache.camel.Consume; public class Foo { @Consume(\"activemq:cheese\") public void onCheese(String name) { } }", "from(\"activemq:cheese\").bean(\"foo1234\", \"onCheese\")" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-extensions-cdi
A.5. Troubleshooting with Serial Consoles
A.5. Troubleshooting with Serial Consoles Linux kernels can output information to serial ports. This is useful for debugging kernel panics and hardware issues with video devices or headless servers. To enable serial console output for KVM guests: Ensure that the domain XML file of the guest includes configuration for the serial console. For example: <console type='pty'> <source path='/dev/pts/16'/> <target type='virtio' port='1'/> <alias name='console1'/> </console> On the guest, follow the How can I enable serial console for Red Hat Enterprise Linux 7? article on Red Hat Knowledgebase. On the host, you can then access the serial console with the following command, where guestname is the name of the guest virtual machine: You can also use virt-manager to display the virtual text console. In the guest console window, select Serial 1 in Text Consoles from the View menu.
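The Knowledgebase article covers the guest-side steps in detail; in outline, they typically amount to adding a serial console argument to the guest kernel command line, for example as below. The baud rate is illustrative.

# Run inside the guest; applies the argument to all installed kernels
grubby --update-kernel=ALL --args="console=ttyS0,115200"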
[ "<console type='pty'> <source path='/dev/pts/16'/> <target type='virtio' port='1'/> <alias name='console1'/> </console>", "virsh console guestname" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-troubleshooting_with_serial_consoles
function::sock_fam_str2num
function::sock_fam_str2num Name function::sock_fam_str2num - Given a protocol family name (string), return the corresponding protocol family number Synopsis Arguments family The family name
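A hedged usage sketch: assuming the AF_INET-style family names used by the companion sock_fam_num2str function, the one-liner below prints the numeric value for a family name.

stap -e 'probe begin { printf("%d\n", sock_fam_str2num("AF_INET")); exit() }'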
[ "sock_fam_str2num:long(family:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sock-fam-str2num
Chapter 10. Multiple regions and zones configuration for a cluster on vSphere
Chapter 10. Multiple regions and zones configuration for a cluster on vSphere As an administrator, you can specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. This configuration reduces the risk of a hardware failure or network outage causing your cluster to fail. A failure domain configuration lists parameters that create a topology. The following list states some of these parameters: computeCluster datacenter datastore networks resourcePool After you define multiple regions and zones for your OpenShift Container Platform cluster, you can create or migrate nodes to another failure domain. Important If you want to migrate pre-existing OpenShift Container Platform cluster compute nodes to a failure domain, you must define a new compute machine set for the compute node. This new machine set can scale up a compute node according to the topology of the failure domain, and scale down the pre-existing compute node. The cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to any compute node provisioned by a machine set resource. For more information, see Creating a compute machine set . 10.1. Specifying multiple regions and zones for your cluster on vSphere You can configure the infrastructures.config.openshift.io configuration resource to specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. Topology-aware features for the cloud controller manager and the vSphere Container Storage Interface (CSI) Operator Driver require information about the vSphere topology where you host your OpenShift Container Platform cluster. This topology information exists in the infrastructures.config.openshift.io configuration resource. Before you specify regions and zones for your cluster, you must ensure that all datacenters and compute clusters contain tags, so that the cloud provider can add labels to your node. For example, if datacenter-1 represents region-a and compute-cluster-1 represents zone-1 , the cloud provider adds an openshift-region category tag with a value of region-a to datacenter-1 . Additionally, the cloud provider adds an openshift-zone category tag with a value of zone-1 to compute-cluster-1 . Note You can migrate control plane nodes with vMotion capabilities to a failure domain. After you add these nodes to a failure domain, the cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to these nodes. Prerequisites You created the openshift-region and openshift-zone tag categories on the vCenter server. You ensured that each datacenter and compute cluster contains tags that represent the name of their associated region or zone, or both. Optional: If you defined API and Ingress static IP addresses to the installation program, you must ensure that all regions and zones share a common layer 2 network. This configuration ensures that API and Ingress Virtual IP (VIP) addresses can interact with your cluster. Important If you do not supply tags to all datacenters and compute clusters before you create a node or migrate a node, the cloud provider cannot add the topology.kubernetes.io/zone and topology.kubernetes.io/region labels to the node. This means that services cannot route traffic to your node.
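One common way to satisfy the tagging prerequisite is the govc CLI; the sketch below is illustrative, with category names matching this procedure and hypothetical datacenter and cluster paths.

# Create the tag categories, then the tags, then attach them to vSphere objects (paths are placeholders)
govc tags.category.create -d "OpenShift region" openshift-region
govc tags.category.create -d "OpenShift zone" openshift-zone
govc tags.create -c openshift-region region-a
govc tags.create -c openshift-zone zone-1
govc tags.attach -c openshift-region region-a /datacenter-1
govc tags.attach -c openshift-zone zone-1 /datacenter-1/host/compute-cluster-1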
Procedure Edit the infrastructures.config.openshift.io custom resource definition (CRD) of your cluster to specify multiple regions and zones in the failureDomains section of the resource by running the following command: $ oc edit infrastructures.config.openshift.io cluster Example infrastructures.config.openshift.io CRD for an instance named cluster with multiple regions and zones defined in its configuration spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_datacenter> - <region_b_datacenter> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: "</region_a_dc/host/zone_a_cluster>" resourcePool: "</region_a_dc/host/zone_a_cluster/Resources/resource_pool>" datastore: "</region_a_dc/datastore/datastore_a>" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {} Important After you create a failure domain and you define it in a CRD for a VMware vSphere cluster, you must not modify or delete the failure domain. Doing any of these actions with this configuration can impact the availability and fault tolerance of a control plane machine. Save the resource file to apply the changes. Additional resources Parameters for the cluster-wide infrastructure CRD 10.2. Enabling a multiple layer 2 network for your cluster You can configure your cluster to use a multiple layer 2 network configuration so that data transfer among nodes can span across multiple networks. Prerequisites You configured network connectivity among machines so that cluster components can communicate with each other. Procedure If you installed your cluster with installer-provisioned infrastructure, you must ensure that all control plane nodes share a common layer 2 network. Additionally, ensure compute nodes that are configured for Ingress pod scheduling share a common layer 2 network. If you need compute nodes to span multiple layer 2 networks, you can create infrastructure nodes that can host Ingress pods. If you need to provision workloads across additional layer 2 networks, you can create compute machine sets on vSphere and then move these workloads to your target layer 2 networks. If you installed your cluster on infrastructure that you provided, which is defined as a user-provisioned infrastructure, complete the following actions to meet your needs: Configure your API load balancer and network so that the load balancer can reach the API and Machine Config Server on the control plane nodes. Configure your Ingress load balancer and network so that the load balancer can reach the Ingress pods on the compute or infrastructure nodes. Additional resources Installing a cluster on vSphere with network customizations Creating infrastructure machine sets for production environments Creating a compute machine set 10.3.
Parameters for the cluster-wide infrastructure CRD You must set values for specific parameters in the cluster-wide infrastructure, infrastructures.config.openshift.io , Custom Resource Definition (CRD) to define multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. The following table lists mandatory parameters for defining multiple regions and zones for your OpenShift Container Platform cluster: Parameter Description vcenters The vCenter server for your OpenShift Container Platform cluster. You can only specify one vCenter for your cluster. datacenters vCenter datacenters where VMs associated with the OpenShift Container Platform cluster will be created or presently exist. port The TCP port of the vCenter server. server The fully qualified domain name (FQDN) of the vCenter server. failureDomains The list of failure domains. name The name of the failure domain. region The value of the openshift-region tag assigned to the topology for the failure domain. zone The value of the openshift-zone tag assigned to the topology for the failure domain. topology The vCenter resources associated with the failure domain. datacenter The datacenter associated with the failure domain. computeCluster The full path of the compute cluster associated with the failure domain. resourcePool The full path of the resource pool associated with the failure domain. datastore The full path of the datastore associated with the failure domain. networks A list of port groups associated with the failure domain. Only one port group can be defined. Additional resources Specifying multiple regions and zones for your cluster on vSphere
[ "oc edit infrastructures.config.openshift.io cluster", "spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_datacenter> - <region_b_datacenter> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: \"</region_a_dc/host/zone_a_cluster>\" resourcePool: \"</region_a_dc/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_dc/datastore/datastore_a>\" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_vsphere/post-install-vsphere-zones-regions-configuration
Chapter 3. ProjectRequest [project.openshift.io/v1]
Chapter 3. ProjectRequest [project.openshift.io/v1] Description ProjectRequest is the set of options necessary to fully qualify a project request Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string Description is the description to apply to a project displayName string DisplayName is the display name to apply to a project kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta 3.2. API endpoints The following API endpoints are available: /apis/project.openshift.io/v1/projectrequests GET : list objects of kind ProjectRequest POST : create a ProjectRequest 3.2.1. /apis/project.openshift.io/v1/projectrequests Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description list objects of kind ProjectRequest Table 3.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call.
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method POST Description create a ProjectRequest Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body ProjectRequest schema Table 3.6. HTTP responses HTTP code Response body 200 - OK ProjectRequest schema 201 - Created ProjectRequest schema 202 - Accepted ProjectRequest schema 401 - Unauthorized Empty
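For illustration, a create request against the endpoint above might look as follows. This is a minimal sketch, not an excerpt from the product documentation: the API server address, the bearer token, and the project name my-project are assumptions, and TLS verification is skipped ( -k ) for brevity.

curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"kind":"ProjectRequest","apiVersion":"project.openshift.io/v1","metadata":{"name":"my-project"},"displayName":"My Project","description":"An example project"}' \
  https://<api-server>:6443/apis/project.openshift.io/v1/projectrequests

A successful request returns one of the 200, 201, or 202 responses with a ProjectRequest schema body, as listed in Table 3.6.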
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/project_apis/projectrequest-project-openshift-io-v1
6.3. Enabling vhost-net zero-copy
In Red Hat Enterprise Linux 7, vhost-net zero-copy is disabled by default. To enable it permanently, add a new file vhost-net.conf to /etc/modprobe.d with the following content: If you want to disable this again, you can run the following: The first command unloads the vhost_net module, and the second reloads it with zero-copy disabled. You can enable zero-copy the same way by passing experimental_zcopytx=1 , but a change made with modprobe alone does not persist across reboots. To confirm that this has taken effect, check the output of cat /sys/module/vhost_net/parameters/experimental_zcopytx . It should show:
[ "options vhost_net experimental_zcopytx=1", "modprobe -r vhost_net", "modprobe vhost_net experimental_zcopytx=0", "cat /sys/module/vhost_net/parameters/experimental_zcopytx 0" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-network_configuration-enabling_vhost_net_zero_copy
Chapter 4. Configuring Kafka
Kafka uses a properties file to store static configuration. The recommended location for the configuration file is /opt/kafka/config/server.properties . The configuration file has to be readable by the kafka user. AMQ Streams ships an example configuration file that highlights various basic and advanced features of the product. It can be found under config/server.properties in the AMQ Streams installation directory. This chapter explains the most important configuration options. For a complete list of supported Kafka broker configuration options, see Appendix A, Broker configuration parameters . 4.1. ZooKeeper Kafka brokers need ZooKeeper to store some parts of their configuration as well as to coordinate the cluster (for example, to decide which node is a leader for which partition). Connection details for the ZooKeeper cluster are stored in the configuration file. The field zookeeper.connect contains a comma-separated list of hostnames and ports of members of the ZooKeeper cluster. For example: zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 Kafka will use these addresses to connect to the ZooKeeper cluster. With this configuration, all Kafka znodes will be created directly in the root of the ZooKeeper database. Therefore, such a ZooKeeper cluster can be used only for a single Kafka cluster. To configure multiple Kafka clusters to use a single ZooKeeper cluster, specify a base (prefix) path at the end of the ZooKeeper connection string in the Kafka configuration file: zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181/my-cluster-1 4.2. Listeners Kafka brokers can be configured to use multiple listeners. Each listener can be used to listen on a different port or network interface and can have a different configuration. Listeners are configured in the listeners property in the configuration file. The listeners property contains a list of listeners, with each listener configured as <listenerName>://<hostname>:<port> . When the hostname value is empty, Kafka will use java.net.InetAddress.getCanonicalHostName() as the hostname. A sketch of how multiple listeners might be configured is shown below. When a Kafka client wants to connect to a Kafka cluster, it first connects to a bootstrap server . The bootstrap server is one of the cluster nodes. It will provide the client with a list of all the other brokers which are part of the cluster, and the client will connect to them individually. By default, the bootstrap server will provide the client with a list of nodes based on the listeners field. Advertised listeners It is possible to give the client a different set of addresses than given in the listeners property. This is useful in situations where additional network infrastructure, such as a proxy, is between the client and the broker, or when an external DNS name should be used instead of an IP address. The broker allows you to define the advertised addresses of the listeners in the advertised.listeners configuration property. This property has the same format as the listeners property; the sketch below also shows advertised listeners. Note The names of the listeners have to match the names of the listeners from the listeners property. Inter-broker listeners When the cluster has replicated topics, the brokers responsible for such topics need to communicate with each other in order to replicate the messages in those topics. 
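For illustration, a minimal sketch of how the listeners and advertised.listeners properties might be combined follows. The listener names, hostname, and ports are placeholders chosen for this sketch, not values from the product documentation.

listeners=INT1://:9092,INT2://:9093
advertised.listeners=INT1://my-public-address:9092,INT2://my-public-address:9093

With such a configuration, clients connecting through either listener are given the my-public-address hostname instead of the address the broker binds to locally.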
When multiple listeners are configured, the configuration field inter.broker.listener.name can be used to specify the name of the listener which should be used for replication between brokers. For example: 4.3. Commit logs Apache Kafka stores all records it receives from producers in commit logs. The commit logs contain the actual data, in the form of records, that Kafka needs to deliver. These are not the application log files which record what the broker is doing. Log directories You can configure log directories using the log.dirs property to store commit logs in one or multiple log directories. It should be set to the /var/lib/kafka directory created during installation: For performance reasons, you can configure log.dirs to multiple directories and place each of them on a different physical device to improve disk I/O performance. For example: 4.4. Broker ID Broker ID is a unique identifier for each broker in the cluster. You can assign an integer greater than or equal to 0 as broker ID. The broker ID is used to identify the brokers after restarts or crashes and it is therefore important that the ID is stable and does not change over time. The broker ID is configured in the broker properties file: 4.5. Running a multi-node Kafka cluster This procedure describes how to configure and run Kafka as a multi-node cluster. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. A ZooKeeper cluster is configured and running . Running the cluster For each Kafka broker in your AMQ Streams cluster: Edit the /opt/kafka/config/server.properties Kafka configuration file as follows: Set the broker.id field to 0 for the first broker, 1 for the second broker, and so on. Configure the details for connecting to ZooKeeper in the zookeeper.connect option. Configure the Kafka listeners. Set the directories where the commit logs should be stored in the log.dirs option. Here we see an example configuration for a Kafka broker: broker.id=0 zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=REPLICATION://:9091,PLAINTEXT://:9092 inter.broker.listener.name=REPLICATION log.dirs=/var/lib/kafka In a typical installation where each Kafka broker is running on identical hardware, only the broker.id configuration property will differ between each broker config. Start the Kafka broker with the default configuration file. su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties Verify that the Kafka broker is running. jcmd | grep Kafka Verifying the brokers Once all nodes of the cluster are up and running, verify that all nodes are members of the Kafka cluster by sending a dump command to one of the ZooKeeper nodes using the ncat utility. The command prints all Kafka brokers registered in ZooKeeper. Use ncat stat to check the node status. echo dump | ncat zoo1.my-domain.com 2181 The output should contain all Kafka brokers you just configured and started. Example output from the ncat command for a Kafka cluster with 3 nodes: SessionTracker dump: org.apache.zookeeper.server.quorum.LearnerSessionTracker@28848ab9 ephemeral nodes dump: Sessions with Ephemerals (3): 0x20000015dd00000: /brokers/ids/1 0x10000015dc70000: /controller /brokers/ids/0 0x10000015dc70001: /brokers/ids/2 Additional resources For more information about installing AMQ Streams, see Section 2.3, "Installing AMQ Streams" . For more information about configuring AMQ Streams, see Section 2.8, "Configuring AMQ Streams" . 
For more information about running a ZooKeeper cluster, see Section 3.3, "Running multi-node ZooKeeper cluster" . For a complete list of supported Kafka broker configuration options, see Appendix A, Broker configuration parameters . 4.6. ZooKeeper authentication By default, connections between ZooKeeper and Kafka are not authenticated. However, Kafka and ZooKeeper support Java Authentication and Authorization Service (JAAS) which can be used to set up authentication using Simple Authentication and Security Layer (SASL). ZooKeeper supports authentication using the DIGEST-MD5 SASL mechanism with locally stored credentials. 4.6.1. JAAS Configuration SASL authentication for ZooKeeper connections has to be configured in the JAAS configuration file. By default, Kafka will use the JAAS context named Client for connecting to ZooKeeper. The Client context should be configured in the /opt/kafka/config/jaas.conf file. The context has to enable the PLAIN SASL authentication, as in the following example: 4.6.2. Enabling ZooKeeper authentication This procedure describes how to enable authentication using the SASL DIGEST-MD5 mechanism when connecting to ZooKeeper. Prerequisites Client-to-server authentication is enabled in ZooKeeper Enabling SASL DIGEST-MD5 authentication On all Kafka broker nodes, create or edit the /opt/kafka/config/jaas.conf JAAS configuration file and add the following context: The username and password should be the same as configured in ZooKeeper. The following example shows the Client context: Restart all Kafka broker nodes one by one. To pass the JAAS configuration to Kafka brokers, use the KAFKA_OPTS environment variable. Additional resources For more information about configuring client-to-server authentication in ZooKeeper, see Section 3.4, "Authentication" . 4.7. Authorization Authorization in Kafka brokers is implemented using authorizer plugins. In this section, we describe how to use the AclAuthorizer plugin provided with Kafka. Alternatively, you can use your own authorization plugins. For example, if you are using OAuth 2.0 token-based authentication , you can use OAuth 2.0 authorization . 4.7.1. Simple ACL authorizer Authorizer plugins, including AclAuthorizer , are enabled through the authorizer.class.name property: authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer A fully-qualified name is required for the chosen authorizer. For AclAuthorizer , the fully-qualified name is kafka.security.auth.SimpleAclAuthorizer . 4.7.1.1. ACL rules AclAuthorizer uses ACL rules to manage access to Kafka brokers. ACL rules are defined in the format: Principal P is allowed / denied operation O on Kafka resource R from host H For example, a rule might be set so that user John can view the topic comments from host 127.0.0.1 . Host is the IP address of the machine that John is connecting from. In most cases, the user is a producer or consumer application: Consumer01 can write to the consumer group accounts from host 127.0.0.1 If ACL rules are not present If ACL rules are not present for a given resource, all actions are denied. This behavior can be changed by setting the property allow.everyone.if.no.acl.found to true in the Kafka configuration file /opt/kafka/config/server.properties . 4.7.1.2. Principals A principal represents the identity of a user. The format of the ID depends on the authentication mechanism used by clients to connect to Kafka: User:ANONYMOUS when connected without authentication. 
User:<username> when connected using simple authentication mechanisms, such as PLAIN or SCRAM. For example User:admin or User:user1 . User:<DistinguishedName> when connected using TLS client authentication. For example User:CN=user1,O=MyCompany,L=Prague,C=CZ . User:<Kerberos username> when connected using Kerberos. The DistinguishedName is the distinguished name from the client certificate. The Kerberos username is the primary part of the Kerberos principal, which is used by default when connecting using Kerberos. You can use the sasl.kerberos.principal.to.local.rules property to configure how the Kafka principal is built from the Kerberos principal. 4.7.1.3. Authentication of users To use authorization, you need to have authentication enabled and used by your clients. Otherwise, all connections will have the principal User:ANONYMOUS . For more information on methods of authentication, see Encryption and authentication . 4.7.1.4. Super users Super users are allowed to take all actions regardless of the ACL rules. Super users are defined in the Kafka configuration file using the property super.users . For example: 4.7.1.5. Replica broker authentication When authorization is enabled, it is applied to all listeners and all connections. This includes the inter-broker connections used for replication of data between brokers. If enabling authorization, therefore, ensure that you use authentication for inter-broker connections and give the users used by the brokers sufficient rights. For example, if authentication between brokers uses the kafka-broker user, then super user configuration must include the username super.users=User:kafka-broker . 4.7.1.6. Supported resources You can apply Kafka ACLs to these types of resource: Topics Consumer groups The cluster TransactionId DelegationToken 4.7.1.7. Supported operations AclAuthorizer authorizes operations on resources. Fields with X in the following table mark the supported operations for each resource. Table 4.1. Supported operations for a resource Topics Consumer Groups Cluster Read X X Write X Create X Delete X Alter X Describe X X X ClusterAction X All X X X 4.7.1.8. ACL management options ACL rules are managed using the bin/kafka-acls.sh utility, which is provided as part of the Kafka distribution package. Use kafka-acls.sh parameter options to add, list and remove ACL rules, and perform other functions. The parameters require a double-hyphen convention, such as --add . Option Type Description Default add Action Add ACL rule. remove Action Remove ACL rule. list Action List ACL rules. authorizer Action Fully-qualified class name of the authorizer. kafka.security.auth.SimpleAclAuthorizer authorizer-properties Configuration Key/value pairs passed to the authorizer for initialization. For AclAuthorizer , the example values are: zookeeper.connect=zoo1.my-domain.com:2181 . bootstrap-server Resource Host/port pairs to connect to the Kafka cluster. Use this option or the authorizer option, not both. command-config Resource Configuration property file to pass to the Admin Client, which is used in conjunction with the bootstrap-server parameter. cluster Resource Specifies a cluster as an ACL resource. topic Resource Specifies a topic name as an ACL resource. An asterisk ( * ) used as a wildcard translates to all topics . A single command can specify multiple --topic options. group Resource Specifies a consumer group name as an ACL resource. A single command can specify multiple --group options. 
transactional-id Resource Specifies a transactional ID as an ACL resource. Transactional delivery means that all messages sent by a producer to multiple partitions must either all be successfully delivered or none of them. An asterisk ( * ) used as a wildcard translates to all IDs . delegation-token Resource Specifies a delegation token as an ACL resource. An asterisk ( * ) used as a wildcard translates to all tokens . resource-pattern-type Configuration Specifies a type of resource pattern for the add parameter or a resource pattern filter value for the list or remove parameters. Use literal or prefixed as the resource pattern type for a resource name. Use any or match as resource pattern filter values, or a specific pattern type filter. literal allow-principal Principal Principal added to an allow ACL rule. A single command can specify multiple --allow-principal options. deny-principal Principal Principal added to a deny ACL rule. A single command can specify multiple --deny-principal options. principal Principal Principal name used with the list parameter to return a list of ACLs for the principal. A single command can specify multiple --principal options. allow-host Host IP address that allows access to the principals listed in --allow-principal . Hostnames or CIDR ranges are not supported. If --allow-principal is specified, defaults to * meaning "all hosts". deny-host Host IP address that denies access to the principals listed in --deny-principal . Hostnames or CIDR ranges are not supported. If --deny-principal is specified, defaults to * meaning "all hosts". operation Operation Allows or denies an operation. A single command can specify multiple --operation options. All producer Shortcut A shortcut to allow or deny all operations needed by a message producer (WRITE and DESCRIBE on topic, CREATE on cluster). consumer Shortcut A shortcut to allow or deny all operations needed by a message consumer (READ and DESCRIBE on topic, READ on consumer group). idempotent Shortcut A shortcut to enable idempotence when used with the --producer parameter, so that messages are delivered exactly once to a partition. Idempotence is enabled automatically if the producer is authorized to send messages based on a specific transactional ID. force Shortcut A shortcut to accept all queries without prompting. 4.7.2. Enabling authorization This procedure describes how to enable the AclAuthorizer plugin for authorization in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts used as Kafka brokers. Procedure Edit the /opt/kafka/config/server.properties Kafka configuration file to use the AclAuthorizer . (Re)start the Kafka brokers. Additional resources For more information about configuring AMQ Streams, see Section 2.8, "Configuring AMQ Streams" . For more information about running a Kafka cluster, see Section 4.5, "Running a multi-node Kafka cluster" . 4.7.3. Adding ACL rules AclAuthorizer uses Access Control Lists (ACLs), which define a set of rules describing what users can and cannot do. This procedure describes how to add ACL rules when using the AclAuthorizer plugin in Kafka brokers. Rules are added using the kafka-acls.sh utility and stored in ZooKeeper. Prerequisites AMQ Streams is installed on all hosts used as Kafka brokers. Authorization is enabled in Kafka brokers. Procedure Run kafka-acls.sh with the --add option. Examples: Allow user1 and user2 access to read from myTopic using the MyConsumerGroup consumer group. 
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2 Deny user1 access to read myTopic from host 127.0.0.1 . bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1 Add user1 as the consumer of myTopic with MyConsumerGroup . bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1 Additional resources For a list of all kafka-acls.sh options, see Section 4.7.1, "Simple ACL authorizer" . 4.7.4. Listing ACL rules This procedure describes how to list existing ACL rules when using the AclAuthorizer plugin in Kafka brokers. Rules are listed using the kafka-acls.sh utility. Prerequisites AMQ Streams is installed on all hosts used as Kafka brokers. Authorization is enabled in Kafka brokers ACLs have been added . Procedure Run kafka-acls.sh with the --list option. For example: $ bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --list --topic myTopic Current ACLs for resource `Topic:myTopic`: User:user1 has Allow permission for operations: Read from hosts: * User:user2 has Allow permission for operations: Read from hosts: * User:user2 has Deny permission for operations: Read from hosts: 127.0.0.1 User:user1 has Allow permission for operations: Describe from hosts: * User:user2 has Allow permission for operations: Describe from hosts: * User:user2 has Deny permission for operations: Describe from hosts: 127.0.0.1 Additional resources For a list of all kafka-acls.sh options, see Section 4.7.1, "Simple ACL authorizer" . 4.7.5. Removing ACL rules This procedure describes how to remove ACL rules when using the AclAuthorizer plugin in Kafka brokers. Rules are removed using the kafka-acls.sh utility. Prerequisites AMQ Streams is installed on all hosts used as Kafka brokers. Authorization is enabled in Kafka brokers. ACLs have been added . Procedure Run kafka-acls.sh with the --remove option. Examples: Remove the ACL allowing user1 and user2 access to read from myTopic using the MyConsumerGroup consumer group. bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2 Remove the ACL adding user1 as the consumer of myTopic with MyConsumerGroup . 
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1 Remove the ACL denying user1 access to read myTopic from host 127.0.0.1 . bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1 Additional resources For a list of all kafka-acls.sh options, see Section 4.7.1, "Simple ACL authorizer" . For more information about enabling authorization, see Section 4.7.2, "Enabling authorization" . 4.8. ZooKeeper authorization When authentication is enabled between Kafka and ZooKeeper, you can use ZooKeeper Access Control List (ACL) rules to automatically control access to Kafka's metadata stored in ZooKeeper. 4.8.1. ACL Configuration Enforcement of ZooKeeper ACL rules is controlled by the zookeeper.set.acl property in the config/server.properties Kafka configuration file. The property is disabled by default and enabled by setting it to true : If ACL rules are enabled, when a znode is created in ZooKeeper only the Kafka user who created it can modify or delete it. All other users have read-only access. Kafka sets ACL rules only for newly created ZooKeeper znodes . If the ACLs are only enabled after the first start of the cluster, the zookeeper-security-migration.sh tool can set ACLs on all existing znodes . Confidentiality of data in ZooKeeper Data stored in ZooKeeper includes: Topic names and their configuration Salted and hashed user credentials when SASL SCRAM authentication is used. ZooKeeper does not, however, store any records sent or received using Kafka. The data stored in ZooKeeper is assumed to be non-confidential. If the data is to be regarded as confidential (for example because topic names contain customer IDs), the only option available for protection is isolating ZooKeeper on the network level and allowing access only to Kafka brokers. 4.8.2. Enabling ZooKeeper ACLs for a new Kafka cluster This procedure describes how to enable ZooKeeper ACLs in Kafka configuration for a new Kafka cluster. Use this procedure only before the first start of the Kafka cluster. For enabling ZooKeeper ACLs in a cluster that is already running, see Section 4.8.3, "Enabling ZooKeeper ACLs in an existing Kafka cluster" . Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. ZooKeeper cluster is configured and running . Client-to-server authentication is enabled in ZooKeeper. ZooKeeper authentication is enabled in the Kafka brokers. Kafka brokers have not yet been started. Procedure Edit the /opt/kafka/config/server.properties Kafka configuration file to set the zookeeper.set.acl field to true on all cluster nodes. Start the Kafka brokers. 4.8.3. Enabling ZooKeeper ACLs in an existing Kafka cluster This procedure describes how to enable ZooKeeper ACLs in Kafka configuration for a Kafka cluster that is running. Use the zookeeper-security-migration.sh tool to set ZooKeeper ACLs on all existing znodes . The zookeeper-security-migration.sh tool is available as part of AMQ Streams, and can be found in the bin directory. Prerequisites Kafka cluster is configured and running . Enabling the ZooKeeper ACLs Edit the /opt/kafka/config/server.properties Kafka configuration file to set the zookeeper.set.acl field to true on all cluster nodes. Restart all Kafka brokers one by one. 
Set the ACLs on all existing ZooKeeper znodes using the zookeeper-security-migration.sh tool. For example: 4.9. Encryption and authentication AMQ Streams supports encryption and authentication, which is configured as part of the listener configuration. 4.9.1. Listener configuration Encryption and authentication in Kafka brokers is configured per listener. For more information about Kafka listener configuration, see Section 4.2, "Listeners" . Each listener in the Kafka broker is configured with its own security protocol. The configuration property listener.security.protocol.map defines which listener uses which security protocol. It maps each listener name to its security protocol. Supported security protocols are: PLAINTEXT Listener without any encryption or authentication. SSL Listener using TLS encryption and, optionally, authentication using TLS client certificates. SASL_PLAINTEXT Listener without encryption but with SASL-based authentication. SASL_SSL Listener with TLS-based encryption and SASL-based authentication. Given the following listeners configuration: the listener.security.protocol.map might look like this: This would configure the listener INT1 to use unencrypted connections with SASL authentication, the listener INT2 to use encrypted connections with SASL authentication, and the REPLICATION interface to use TLS encryption (possibly with TLS client authentication). The same security protocol can be used multiple times. The following example is also a valid configuration: Such a configuration would use TLS encryption and TLS authentication for all interfaces. The following sections explain in more detail how to configure TLS and SASL. 4.9.2. TLS Encryption Kafka supports TLS for encrypting communication with Kafka clients. In order to use TLS encryption and server authentication, a keystore containing private and public keys has to be provided. This is usually done using a file in the Java Keystore (JKS) format. A path to this file is set in the ssl.keystore.location property. The ssl.keystore.password property should be used to set the password protecting the keystore. For example: In some cases, an additional password is used to protect the private key. Any such password can be set using the ssl.key.password property. Kafka is able to use keys signed by certification authorities as well as self-signed keys. Using keys signed by certification authorities should always be the preferred method. In order to allow clients to verify the identity of the Kafka broker they are connecting to, the certificate should always contain the advertised hostname(s) as its Common Name (CN) or in the Subject Alternative Names (SAN). It is possible to use different SSL configurations for different listeners. All options starting with ssl. can be prefixed with listener.name.<NameOfTheListener>. , where the name of the listener always has to be in lowercase. This will override the default SSL configuration for that specific listener. The following example shows how to use different SSL configurations for different listeners: Additional TLS configuration options In addition to the main TLS configuration options described above, Kafka supports many options for fine-tuning the TLS configuration. For example, to enable or disable TLS / SSL protocols or cipher suites: ssl.cipher.suites List of enabled cipher suites. Each cipher suite is a combination of authentication, encryption, MAC and key exchange algorithms used for the TLS connection. By default, all available cipher suites are enabled. 
ssl.enabled.protocols List of enabled TLS / SSL protocols. Defaults to TLSv1.2,TLSv1.1,TLSv1 . For a complete list of supported Kafka broker configuration options, see Appendix A, Broker configuration parameters . 4.9.3. Enabling TLS encryption This procedure describes how to enable encryption in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. Procedure Generate TLS certificates for all Kafka brokers in your cluster. The certificates should have their advertised and bootstrap addresses in their Common Name or Subject Alternative Name. Edit the /opt/kafka/config/server.properties Kafka configuration file on all cluster nodes for the following: Change the listener.security.protocol.map field to specify the SSL protocol for the listener where you want to use TLS encryption. Set the ssl.keystore.location option to the path to the JKS keystore with the broker certificate. Set the ssl.keystore.password option to the password you used to protect the keystore. For example: (Re)start the Kafka brokers Additional resources For more information about configuring AMQ Streams, see Section 2.8, "Configuring AMQ Streams" . For more information about running a Kafka cluster, see Section 4.5, "Running a multi-node Kafka cluster" . For more information about configuring TLS encryption in clients, see: Appendix D, Producer configuration parameters Appendix C, Consumer configuration parameters 4.9.4. Authentication For authentication, you can use: TLS client authentication based on X.509 certificates on encrypted connections A supported Kafka SASL (Simple Authentication and Security Layer) mechanism OAuth 2.0 token based authentication 4.9.4.1. TLS client authentication TLS client authentication can be used only on connections which are already using TLS encryption. To use TLS client authentication, a truststore with public keys can be provided to the broker. These keys can be used to authenticate clients connecting to the broker. The truststore should be provided in Java Keystore (JKS) format and should contain public keys of the certification authorities. All clients with public and private keys signed by one of the certification authorities included in the truststore will be authenticated. The location of the truststore is set using the ssl.truststore.location field. If the truststore is password protected, the password should be set in the ssl.truststore.password property. For example: Once the truststore is configured, TLS client authentication has to be enabled using the ssl.client.auth property. This property can be set to one of three different values: none TLS client authentication is switched off. (Default value) requested TLS client authentication is optional. Clients will be asked to authenticate using a TLS client certificate, but they can choose not to. required Clients are required to authenticate using a TLS client certificate. When a client authenticates using TLS client authentication, the authenticated principal name is the distinguished name from the authenticated client certificate. For example, a user with a certificate which has a distinguished name CN=someuser will be authenticated with the following principal CN=someuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown . When TLS client authentication is not used and SASL is disabled, the principal name will be ANONYMOUS . 4.9.4.2. SASL authentication SASL authentication is configured using Java Authentication and Authorization Service (JAAS). 
JAAS is also used for authentication of connections between Kafka and ZooKeeper. JAAS uses its own configuration file. The recommended location for this file is /opt/kafka/config/jaas.conf . The file has to be readable by the kafka user. When running Kafka, the location of this file is specified using the Java system property java.security.auth.login.config . This property has to be passed to Kafka when starting the broker nodes: SASL authentication is supported both through plain unencrypted connections as well as through TLS connections. SASL can be enabled individually for each listener. To enable it, the security protocol in listener.security.protocol.map has to be either SASL_PLAINTEXT or SASL_SSL . SASL authentication in Kafka supports several different mechanisms: PLAIN Implements authentication based on usernames and passwords. Usernames and passwords are stored locally in Kafka configuration. SCRAM-SHA-256 and SCRAM-SHA-512 Implements authentication using Salted Challenge Response Authentication Mechanism (SCRAM). SCRAM credentials are stored centrally in ZooKeeper. SCRAM can be used in situations where ZooKeeper cluster nodes are running isolated in a private network. GSSAPI Implements authentication against a Kerberos server. Warning The PLAIN mechanism sends the username and password over the network in an unencrypted format. It should therefore only be used in combination with TLS encryption. The SASL mechanisms are configured via the JAAS configuration file. Kafka uses the JAAS context named KafkaServer . After they are configured in JAAS, the SASL mechanisms have to be enabled in the Kafka configuration. This is done using the sasl.enabled.mechanisms property. This property contains a comma-separated list of enabled mechanisms: If the listener used for inter-broker communication uses SASL, the property sasl.mechanism.inter.broker.protocol has to be used to specify the SASL mechanism which it should use. For example: The username and password to be used for inter-broker communication have to be specified in the KafkaServer JAAS context using the username and password fields. SASL PLAIN To use the PLAIN mechanism, the usernames and passwords which are allowed to connect are specified directly in the JAAS context. The following example shows the context configured for SASL PLAIN authentication. The example configures three different users: admin user1 user2 The JAAS configuration file with the user database should be kept in sync on all Kafka brokers. When SASL PLAIN is also used for inter-broker authentication, the username and password properties should be included in the JAAS context: SASL SCRAM SCRAM authentication in Kafka consists of two mechanisms: SCRAM-SHA-256 and SCRAM-SHA-512 . These mechanisms differ only in the hashing algorithm used - SHA-256 versus stronger SHA-512. To enable SCRAM authentication, the JAAS configuration file has to include the following configuration: When enabling SASL authentication in the Kafka configuration file, both SCRAM mechanisms can be listed. However, only one of them can be chosen for the inter-broker communication. For example: User credentials for the SCRAM mechanism are stored in ZooKeeper. The kafka-configs.sh tool can be used to manage them. For example, run the following command to add user user1 with password 123456: To delete a user credential, use: SASL GSSAPI The SASL mechanism used for authentication using Kerberos is called GSSAPI . 
To configure Kerberos SASL authentication, the following configuration should be added to the JAAS configuration file: The domain name in the Kerberos principal always has to be in uppercase. In addition to the JAAS configuration, the Kerberos service name needs to be specified in the sasl.kerberos.service.name property in the Kafka configuration: Multiple SASL mechanisms Kafka can use multiple SASL mechanisms at the same time. The different JAAS configurations can all be added to the same context: When multiple mechanisms are enabled, clients will be able to choose the mechanism which they want to use. 4.9.5. Enabling TLS client authentication This procedure describes how to enable TLS client authentication in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. TLS encryption is enabled . Procedure Prepare a JKS truststore containing the public key of the certification authority used to sign the user certificates. Edit the /opt/kafka/config/server.properties Kafka configuration file on all cluster nodes for the following: Set the ssl.truststore.location option to the path to the JKS truststore with the certification authority of the user certificates. Set the ssl.truststore.password option to the password you used to protect the truststore. Set the ssl.client.auth option to required . For example: (Re)start the Kafka brokers Additional resources For more information about configuring AMQ Streams, see Section 2.8, "Configuring AMQ Streams" . For more information about running a Kafka cluster, see Section 4.5, "Running a multi-node Kafka cluster" . For more information about configuring TLS encryption in clients, see: Appendix D, Producer configuration parameters Appendix C, Consumer configuration parameters 4.9.6. Enabling SASL PLAIN authentication This procedure describes how to enable SASL PLAIN authentication in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. Procedure Edit or create the /opt/kafka/config/jaas.conf JAAS configuration file. This file should contain all your users and their passwords. Make sure this file is the same on all Kafka brokers. For example: Edit the /opt/kafka/config/server.properties Kafka configuration file on all cluster nodes for the following: Change the listener.security.protocol.map field to specify the SASL_PLAINTEXT or SASL_SSL protocol for the listener where you want to use SASL PLAIN authentication. Set the sasl.enabled.mechanisms option to PLAIN . For example: (Re)start the Kafka brokers using the KAFKA_OPTS environment variable to pass the JAAS configuration to Kafka brokers. Additional resources For more information about configuring AMQ Streams, see Section 2.8, "Configuring AMQ Streams" . For more information about running a Kafka cluster, see Section 4.5, "Running a multi-node Kafka cluster" . For more information about configuring SASL PLAIN authentication in clients, see: Appendix D, Producer configuration parameters Appendix C, Consumer configuration parameters 4.9.7. Enabling SASL SCRAM authentication This procedure describes how to enable SASL SCRAM authentication in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. Procedure Edit or create the /opt/kafka/config/jaas.conf JAAS configuration file. Enable the ScramLoginModule for the KafkaServer context. Make sure this file is the same on all Kafka brokers. 
For example: Edit the /opt/kafka/config/server.properties Kafka configuration file on all cluster nodes for the following: Change the listener.security.protocol.map field to specify the SASL_PLAINTEXT or SASL_SSL protocol for the listener where you want to use SASL SCRAM authentication. Set the sasl.enabled.mechanisms option to SCRAM-SHA-256 or SCRAM-SHA-512 . For example: (Re)start the Kafka brokers using the KAFKA_OPTS environment variable to pass the JAAS configuration to Kafka brokers. Additional resources For more information about configuring AMQ Streams, see Section 2.8, "Configuring AMQ Streams" . For more information about running a Kafka cluster, see Section 4.5, "Running a multi-node Kafka cluster" . For more information about adding SASL SCRAM users, see Section 4.9.8, "Adding SASL SCRAM users" . For more information about deleting SASL SCRAM users, see Section 4.9.9, "Deleting SASL SCRAM users" . For more information about configuring SASL SCRAM authentication in clients, see: Appendix D, Producer configuration parameters Appendix C, Consumer configuration parameters 4.9.8. Adding SASL SCRAM users This procedure describes how to add new users for authentication using SASL SCRAM. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. SASL SCRAM authentication is enabled . Procedure Use the kafka-configs.sh tool to add new SASL SCRAM users. bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --alter --add-config 'SCRAM-SHA-512=[password= <Password> ]' --entity-type users --entity-name <Username> For example: Additional resources For more information about configuring SASL SCRAM authentication in clients, see: Appendix D, Producer configuration parameters Appendix C, Consumer configuration parameters 4.9.9. Deleting SASL SCRAM users This procedure describes how to remove users when using SASL SCRAM authentication. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. SASL SCRAM authentication is enabled . Procedure Use the kafka-configs.sh tool to delete SASL SCRAM users. bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name <Username> For example: Additional resources For more information about configuring SASL SCRAM authentication in clients, see: Appendix D, Producer configuration parameters Appendix C, Consumer configuration parameters 4.10. Using OAuth 2.0 token-based authentication AMQ Streams supports the use of OAuth 2.0 authentication using the SASL OAUTHBEARER mechanism. OAuth 2.0 enables standardized token based authentication and authorization between applications, using a central authorization server to issue tokens that grant limited access to resources. You can configure OAuth 2.0 authentication, then OAuth 2.0 authorization . OAuth 2.0 authentication can also be used in conjunction with ACL-based Kafka authorization regardless of the authorization server used. Using OAuth 2.0 token-based authentication, application clients can access resources on application servers (called resource servers ) without exposing account credentials. The application client passes an access token as a means of authenticating, which application servers can also use to determine the level of access to grant. The authorization server handles the granting of access and inquiries about access. 
In the context of AMQ Streams: Kafka brokers act as OAuth 2.0 resource servers Kafka clients act as OAuth 2.0 application clients Kafka clients authenticate to Kafka brokers. The brokers and clients communicate with the OAuth 2.0 authorization server, as necessary, to obtain or validate access tokens. For a deployment of AMQ Streams, OAuth 2.0 integration provides: Server-side OAuth 2.0 support for Kafka brokers Client-side OAuth 2.0 support for Kafka Mirror Maker, Kafka Connect, and the Kafka Bridge Additional resources OAuth 2.0 site 4.10.1. OAuth 2.0 authentication mechanism The Kafka SASL OAUTHBEARER mechanism is used to establish authenticated sessions with a Kafka broker. A Kafka client initiates a session with the Kafka broker using the SASL OAUTHBEARER mechanism for credentials exchange, where credentials take the form of an access token. Kafka brokers and clients need to be configured to use OAuth 2.0. 4.10.1.1. Configuring OAuth 2.0 with properties or variables You can configure OAuth 2.0 settings using Java Authentication and Authorization Service (JAAS) properties or environment variables. JAAS properties are configured in the server.properties configuration file, and passed as key-value pairs of the listener.name. LISTENER-NAME .oauthbearer.sasl.jaas.config property. If using environment variables, you still need the listener.name. LISTENER-NAME .oauthbearer.sasl.jaas.config property in the server.properties file, but you can omit the other JAAS properties. You can use capitalized or upper-case environment variable naming conventions. The Kafka OAuth 2.0 library uses properties that start with oauth. to configure authentication, and properties that start with strimzi. to configure OAuth 2.0 authorization . 4.10.2. OAuth 2.0 Kafka broker configuration Kafka broker configuration for OAuth 2.0 involves: Creating the OAuth 2.0 client in the authorization server Configuring OAuth 2.0 authentication in the Kafka cluster Note In relation to the authorization server, Kafka brokers and Kafka clients are both regarded as OAuth 2.0 clients. 4.10.2.1. OAuth 2.0 client configuration on an authorization server To configure a Kafka broker to validate the token received during session initiation, the recommended approach is to create an OAuth 2.0 client definition in an authorization server, configured as confidential , with the following client credentials enabled: Client ID of kafka-broker (for example) Client ID and secret as the authentication mechanism Note You only need to use a client ID and secret when using a non-public introspection endpoint of the authorization server. The credentials are not typically required when using public authorization server endpoints, as with fast local JWT token validation. 4.10.2.2. OAuth 2.0 authentication configuration in the Kafka cluster To use OAuth 2.0 authentication in the Kafka cluster, you enable a listener configuration for your Kafka cluster in the Kafka server.properties file. A minimum configuration is required. You can also configure a TLS listener, where TLS is used for inter-broker communication. You can configure the broker for token validation by the authorization server using the: JWKS endpoint in combination with signed JWT-formatted access tokens Introspection endpoint The minimum configuration shown here applies a global listener configuration. This means that inter-broker communication goes through the same listener as application clients. To enable OAuth 2.0 configuration for a specific listener, you specify listener.name. 
LISTENER-NAME .sasl.enabled.mechanisms instead of sasl.enabled.mechanisms , which is shown in the listener configuration examples below. LISTENER-NAME is the name of the listener (case insensitive). In the example below, we name the listener CLIENT , so the property name will be listener.name.client.sasl.enabled.mechanisms . Minimum listener configuration for OAuth 2.0 authentication using a JWKS endpoint sasl.enabled.mechanisms=OAUTHBEARER 1 listeners=CLIENT://0.0.0.0:9092 2 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 3 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER 4 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 5 inter.broker.listener.name=CLIENT 6 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 7 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ 8 oauth.valid.issuer.uri="https:// AUTH-SERVER-ADDRESS " \ 9 oauth.jwks.endpoint.uri="https:// AUTH-SERVER-ADDRESS /jwks" \ 10 oauth.username.claim="preferred_username" \ 11 oauth.client.id="kafka-broker" \ 12 oauth.client.secret="kafka-secret" \ 13 oauth.token.endpoint.uri="https:// AUTH-SERVER-ADDRESS /token" ; 14 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 15 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 16 1 Enables OAUTHBEARER as the SASL mechanism for credentials exchange over SASL. 2 Configures a listener for client applications to connect to. The system hostname is used as an advertised hostname, which clients must resolve in order to reconnect. The listener is named CLIENT in this example. 3 Specifies the channel protocol for the listener. SASL_SSL is for TLS. SASL_PLAINTEXT is used for an unencrypted connection (no TLS), but there is risk of eavesdropping and interception at the TCP connection layer. 4 Specifies OAUTHBEARER as SASL for the CLIENT listener. The listener name ( CLIENT ) is usually specified in uppercase in the listeners property, and in lowercase when part of listener.name properties ( listener.name.client ). 5 Specifies OAUTHBEARER as SASL for inter-broker communication. 6 Specifies the listener for inter-broker communication. The specification is required for the configuration to be valid. 7 Configures OAuth 2.0 authentication on the client listener. 8 Configures authentication settings for client and inter-broker communication. The oauth.client.id , oauth.client.secret , and oauth.token.endpoint.uri properties relate to inter-broker configuration. 9 A valid issuer URI. Only access tokens issued by this issuer will be accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME . 10 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs . 11 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. 12 Client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . 13 Secret for the Kafka broker, which is the same for all brokers. 14 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use HTTPS. 
For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token . 15 Enables (and is only required for) OAuth 2.0 authentication for inter-broker communication. 16 (Optional) Enforces session expiry when the token expires, and also activates the Kafka re-authentication mechanism . If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. TLS listener configuration for OAuth 2.0 authentication sasl.enabled.mechanisms= listeners=REPLICATION://kafka:9091,CLIENT://kafka:9092 1 listener.security.protocol.map=REPLICATION:SSL,CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER inter.broker.listener.name=REPLICATION listener.name.replication.ssl.keystore.password= KEYSTORE-PASSWORD 3 listener.name.replication.ssl.truststore.password= TRUSTSTORE-PASSWORD listener.name.replication.ssl.keystore.type=JKS listener.name.replication.ssl.truststore.type=JKS listener.name.replication.ssl.endpoint.identification.algorithm=HTTPS 4 listener.name.replication.ssl.secure.random.implementation=SHA1PRNG 5 listener.name.replication.ssl.keystore.location= PATH-TO-KEYSTORE 6 listener.name.replication.ssl.truststore.location= PATH-TO-TRUSTSTORE 7 listener.name.replication.ssl.client.auth=required 8 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.valid.issuer.uri="https:// AUTH-SERVER-ADDRESS " \ oauth.jwks.endpoint.uri="https:// AUTH-SERVER-ADDRESS /jwks" \ oauth.username.claim="preferred_username" ; 9 1 Separate configurations are required for inter-broker communication and client applications. 2 Configures the REPLICATION listener to use TLS, and the CLIENT listener to use SASL over an unencrypted channel. The client could use an encrypted channel ( SASL_SSL ) in a production environment. 3 The ssl. properties define the TLS configuration. 4 Hostname verification. If set to an empty string, hostname verification is turned off. If not set, the default value is HTTPS, which enforces hostname verification for server certificates. 5 Random number generator implementation. If not set, the Java platform SDK default is used. 6 Path to the keystore for the listener. 7 Path to the truststore for the listener. 8 Specifies that clients of the REPLICATION listener have to authenticate with a client certificate when establishing a TLS connection (used for inter-broker connectivity). 9 Configures the CLIENT listener for OAuth 2.0. Connectivity with the authorization server should use secure HTTPS connections. 4.10.2.3. Fast local JWT token validation configuration Fast local JWT token validation checks a JWT token signature locally. The local check ensures that a token: Conforms to type by containing a ( typ ) claim value of Bearer for an access token Is valid (not expired) Has an issuer that matches a validIssuerURI You specify a valid issuer URI when you configure the listener, so that any tokens not issued by the authorization server are rejected. The authorization server does not need to be contacted during fast local JWT token validation. 
You activate fast local JWT token validation by specifying a JWKs endpoint URI exposed by the OAuth 2.0 authorization server. The endpoint contains the public keys used to validate signed JWT tokens, which are sent as credentials by Kafka clients. Note All communication with the authorization server should be performed using HTTPS. For a TLS listener, you can configure a certificate truststore and point to the truststore file. Example properties for fast local JWT token validation listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.valid.issuer.uri="https:// AUTH-SERVER-ADDRESS " \ 1 oauth.jwks.endpoint.uri="https:// AUTH-SERVER-ADDRESS /jwks" \ 2 oauth.jwks.refresh.seconds=" 300 " \ 3 oauth.jwks.refresh.min.pause.seconds=" 1 " \ 4 oauth.jwks.expiry.seconds=" 360 " \ 5 oauth.username.claim="preferred_username" \ 6 oauth.ssl.truststore.location=" PATH-TO-TRUSTSTORE-P12-FILE " \ 7 oauth.ssl.truststore.password=" TRUSTSTORE-PASSWORD " \ 8 oauth.ssl.truststore.type="PKCS12" ; 9 1 A valid issuer URI. Only access tokens issued by this issuer will be accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME . 2 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs . 3 The period between endpoint refreshes (default 300). 4 The minimum pause in seconds between consecutive attempts to refresh JWKS public keys. When an unknown signing key is encountered, the JWKS keys refresh is scheduled outside the regular periodic schedule with at least the specified pause since the last refresh attempt. The refreshing of keys follows the rule of exponential backoff, retrying on unsuccessful refreshes with ever increasing pause, until it reaches oauth.jwks.refresh.seconds . The default value is 1. 5 The duration the JWKs certificates are considered valid before they expire. Default is 360 seconds. If you specify a longer time, consider the risk of allowing access to revoked certificates. 6 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. 7 The location of the truststore used in the TLS configuration. 8 Password to access the truststore. 9 The truststore type in PKCS #12 format. 4.10.2.4. OAuth 2.0 introspection endpoint configuration Token validation using an OAuth 2.0 introspection endpoint treats a received access token as opaque. The Kafka broker sends an access token to the introspection endpoint, which responds with the token information necessary for validation. Importantly, it returns up-to-date information if the specific access token is valid, and also information about when the token expires. To configure OAuth 2.0 introspection-based validation, you specify an introspection endpoint URI rather than the JWKs endpoint URI specified for fast local JWT token validation. Depending on the authorization server, you typically have to specify a client ID and client secret , because the introspection endpoint is usually protected. 
Example properties for an introspection endpoint

listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.introspection.endpoint.uri="https://AUTH-SERVER-ADDRESS/introspection" \ 1
  oauth.client.id="kafka-broker" \ 2
  oauth.client.secret="kafka-broker-secret" \ 3
  oauth.ssl.truststore.location="PATH-TO-TRUSTSTORE-P12-FILE" \ 4
  oauth.ssl.truststore.password="TRUSTSTORE-PASSWORD" \ 5
  oauth.ssl.truststore.type="PKCS12" \ 6
  oauth.username.claim="preferred_username" ; 7

1 The OAuth 2.0 introspection endpoint URI. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token/introspect .
2 Client ID of the Kafka broker.
3 Secret for the Kafka broker.
4 The location of the truststore used in the TLS configuration.
5 Password to access the truststore.
6 The truststore type in PKCS #12 format.
7 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value of oauth.username.claim depends on the authorization server used.
4.10.3. Session re-authentication for Kafka brokers
The Kafka SASL OAUTHBEARER mechanism, which is used for OAuth 2.0 authentication in AMQ Streams, supports a Kafka feature called the re-authentication mechanism.
When the re-authentication mechanism is enabled through a listener configuration, the broker's authenticated session expires when the access token expires. The client must then re-authenticate to the existing session by sending a new, valid access token to the broker, without dropping the connection.
If token validation is successful, a new client session is started using the existing connection. If the client fails to re-authenticate, the broker closes the connection if further attempts are made to send or receive messages. Java clients that use Kafka client library 2.2 or later automatically re-authenticate if the re-authentication mechanism is enabled on the broker.
You enable session re-authentication for a Kafka broker in the Kafka server.properties file. Set the connections.max.reauth.ms property for a TLS listener with OAUTHBEARER enabled as the SASL mechanism. You can specify session re-authentication per listener. For example:

listener.name.client.oauthbearer.connections.max.reauth.ms=3600000

Session re-authentication is supported for both types of token validation (fast local JWT and introspection endpoint). For an example configuration, see Section 4.10.6.2, "Configuring OAuth 2.0 support for Kafka brokers" .
For more information about the re-authentication mechanism, which was added in Kafka version 2.2, see KIP-368 .
Additional resources
Section 4.10.2, "OAuth 2.0 Kafka broker configuration"
Section 4.10.6.2, "Configuring OAuth 2.0 support for Kafka brokers"
4.10.4. OAuth 2.0 Kafka client configuration
A Kafka client is configured with either:
The credentials required to obtain a valid access token from an authorization server (client ID and secret)
A valid long-lived access token or refresh token, obtained using tools provided by an authorization server
Credentials are never sent to the Kafka broker. The only information ever sent to the Kafka broker is an access token. When a client obtains an access token, no further communication with the authorization server is needed.
The simplest mechanism is authentication with a client ID and secret. Using a long-lived access token, or a long-lived refresh token, adds more complexity because there is an additional dependency on authorization server tools.
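To see what the simplest mechanism looks like at the HTTP level, you can request an access token yourself with the client credentials grant. This is a hedged sketch: it assumes a Keycloak-style token endpoint, and the my-client / my-client-secret credentials are hypothetical placeholders. The JSON response contains the access_token value that the client then presents to the broker:

# Exchange client credentials for an access token
curl -s -X POST https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token \
  -d "grant_type=client_credentials" \
  -d "client_id=my-client" \
  -d "client_secret=my-client-secret"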
Note
If you are using long-lived access tokens, you may need to configure the client in the authorization server to increase the maximum lifetime of the token.
If the Kafka client is not configured with an access token directly, the client exchanges credentials for an access token during Kafka session initiation by contacting the authorization server. The Kafka client exchanges either:
A client ID and secret
A client ID, a refresh token, and (optionally) a secret
4.10.5. OAuth 2.0 client authentication flow
In this section, we explain and visualize the communication flow between a Kafka client, a Kafka broker, and an authorization server during Kafka session initiation. The flow depends on the client and server configuration.
When a Kafka client sends an access token as credentials to a Kafka broker, the token needs to be validated. Depending on the authorization server used, and the configuration options available, you may prefer to use:
Fast local token validation based on JWT signature checking and local token introspection, without contacting the authorization server
An OAuth 2.0 introspection endpoint provided by the authorization server
Using fast local token validation requires the authorization server to provide a JWKS endpoint with public certificates that are used to validate signatures on the tokens.
Another option is to use an OAuth 2.0 introspection endpoint on the authorization server. Each time a new Kafka broker connection is established, the broker passes the access token received from the client to the authorization server, and checks the response to confirm whether or not the token is valid.
Kafka client credentials can also be configured for:
Direct local access using a previously generated long-lived access token
Contact with the authorization server for a new access token to be issued
Note
An authorization server might only allow the use of opaque access tokens, which means that local token validation is not possible.
4.10.5.1. Example client authentication flows
Here you can see the communication flows, for different configurations of Kafka clients and brokers, during Kafka session authentication.
Client using client ID and secret, with broker delegating validation to authorization server
Client using client ID and secret, with broker performing fast local token validation
Client using long-lived access token, with broker delegating validation to authorization server
Client using long-lived access token, with broker performing fast local validation
Client using client ID and secret, with broker delegating validation to authorization server
Kafka client requests an access token from the authorization server, using its client ID and secret, and optionally a refresh token.
Authorization server generates a new access token.
Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token.
Kafka broker validates the access token by calling a token introspection endpoint on the authorization server, using its own client ID and secret.
A Kafka client session is established if the token is valid.
Client using client ID and secret, with broker performing fast local token validation
Kafka client requests an access token from the authorization server's token endpoint, using its client ID and secret, and optionally a refresh token.
Authorization server generates a new access token.
Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token.
Kafka broker validates the access token locally using a JWT token signature check and local token introspection.
Client using long-lived access token, with broker delegating validation to authorization server
Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token.
Kafka broker validates the access token by calling a token introspection endpoint on the authorization server, using its own client ID and secret.
A Kafka client session is established if the token is valid.
Client using long-lived access token, with broker performing fast local validation
Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token.
Kafka broker validates the access token locally using a JWT token signature check and local token introspection.
Warning
Fast local JWT token signature validation is suitable only for short-lived tokens, because there is no check with the authorization server to determine whether a token has been revoked. Token expiration is written into the token, but revocation can happen at any time, so it cannot be accounted for without contacting the authorization server. Any issued token is considered valid until it expires.
4.10.6. Configuring OAuth 2.0 authentication
OAuth 2.0 is used for interaction between Kafka clients and AMQ Streams components.
In order to use OAuth 2.0 for AMQ Streams, you must:
Configure an OAuth 2.0 authorization server for the AMQ Streams cluster and Kafka clients
Deploy or update the Kafka cluster with Kafka broker listeners configured to use OAuth 2.0
Update your Java-based Kafka clients to use OAuth 2.0
4.10.6.1. Configuring Red Hat Single Sign-On as an OAuth 2.0 authorization server
This procedure describes how to deploy Red Hat Single Sign-On as an authorization server and configure it for integration with AMQ Streams.
The authorization server provides a central point for authentication and authorization, and for the management of users, clients, and permissions. Red Hat Single Sign-On has a concept of realms, where a realm represents a separate set of users, clients, permissions, and other configuration. You can use a default master realm , or create a new one. Each realm exposes its own OAuth 2.0 endpoints, which means that application clients and application servers all need to use the same realm.
To use OAuth 2.0 with AMQ Streams, you need a deployment of an authorization server so that you can create and manage authentication realms.
Note
If you already have Red Hat Single Sign-On deployed, you can skip the deployment step and use your current deployment.
Before you begin
You will need to be familiar with using Red Hat Single Sign-On. For installation and administration instructions, see:
Server Installation and Configuration Guide
Server Administration Guide
Prerequisites
AMQ Streams and Kafka are running
For the Red Hat Single Sign-On deployment:
Check the Red Hat Single Sign-On Supported Configurations
Procedure
Install Red Hat Single Sign-On. You can install from a ZIP file or by using an RPM.
Log in to the Red Hat Single Sign-On Admin Console to create the OAuth 2.0 policies for AMQ Streams. Login details are provided when you deploy Red Hat Single Sign-On.
Create and enable a realm. You can use an existing master realm.
Adjust the session and token timeouts for the realm, if required.
Create a client called kafka-broker .
From the Settings tab, set:
Access Type to Confidential
Standard Flow Enabled to OFF to disable web login for this client
Service Accounts Enabled to ON to allow this client to authenticate in its own name
Click Save before continuing.
From the Credentials tab, make a note of the secret for use in your AMQ Streams Kafka cluster configuration.
Repeat the client creation steps for any application client that will connect to your Kafka brokers. Create a definition for each new client. You will use the names as client IDs in your configuration.
What to do next
After deploying and configuring the authorization server, configure the Kafka brokers to use OAuth 2.0 .
4.10.6.2. Configuring OAuth 2.0 support for Kafka brokers
This procedure describes how to configure Kafka brokers so that the broker listeners are enabled to use OAuth 2.0 authentication using an authorization server.
We advise using OAuth 2.0 over an encrypted interface through configuration of TLS listeners. Plain listeners are not recommended.
Configure the Kafka brokers using properties that support your chosen authorization server, and the type of authorization you are implementing.
Before you start
For more information on the configuration and authentication of Kafka broker listeners, see:
Listeners
Encryption and authentication
For a description of the properties used in the listener configuration, see:
OAuth 2.0 Kafka broker configuration
Prerequisites
AMQ Streams and Kafka are running
An OAuth 2.0 authorization server is deployed
Procedure
Configure the Kafka broker listener configuration in the server.properties file. For example:

sasl.enabled.mechanisms=OAUTHBEARER
listeners=CLIENT://0.0.0.0:9092
listener.security.protocol.map=CLIENT:SASL_PLAINTEXT
listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER
sasl.mechanism.inter.broker.protocol=OAUTHBEARER
inter.broker.listener.name=CLIENT
listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ;
listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler

Configure broker connection settings as part of the listener.name.client.oauthbearer.sasl.jaas.config . The examples here show connection configuration options.
Example 1: Local token validation using a JWKS endpoint configuration

listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.valid.issuer.uri="https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME" \
  oauth.jwks.endpoint.uri="https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs" \
  oauth.jwks.refresh.seconds="300" \
  oauth.jwks.refresh.min.pause.seconds="1" \
  oauth.jwks.expiry.seconds="360" \
  oauth.username.claim="preferred_username" \
  oauth.ssl.truststore.location="PATH-TO-TRUSTSTORE-P12-FILE" \
  oauth.ssl.truststore.password="TRUSTSTORE-PASSWORD" \
  oauth.ssl.truststore.type="PKCS12" ;
listener.name.client.oauthbearer.connections.max.reauth.ms=3600000

Example 2: Delegating token validation to the authorization server through the OAuth 2.0 introspection endpoint

listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.introspection.endpoint.uri="https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/introspection" \
  # ...

If required, configure access to the authorization server. This step is normally required for a production environment, unless a technology like service mesh is used to configure secure channels outside the containers.
Provide a custom truststore for connecting to a secured authorization server. SSL is always required for access to the authorization server.
Set properties to configure the truststore. For example:

listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  # ...
  oauth.client.id="kafka-broker" \
  oauth.client.secret="kafka-broker-secret" \
  oauth.ssl.truststore.location="PATH-TO-TRUSTSTORE-P12-FILE" \
  oauth.ssl.truststore.password="TRUSTSTORE-PASSWORD" \
  oauth.ssl.truststore.type="PKCS12" ;

If the certificate hostname does not match the access URL hostname, you can turn off certificate hostname validation:

oauth.ssl.endpoint.identification.algorithm=""

The check ensures that the client connection to the authorization server is authentic. You may wish to turn off the validation in a non-production environment.
Configure additional properties according to your chosen authentication flow:

listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  # ...
  oauth.token.endpoint.uri="https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token" \ 1
  oauth.valid.issuer.uri="https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME" \ 2
  oauth.client.id="kafka-broker" \ 3
  oauth.client.secret="kafka-broker-secret" \ 4
  oauth.refresh.token="REFRESH-TOKEN-FOR-KAFKA-BROKERS" \ 5
  oauth.access.token="ACCESS-TOKEN-FOR-KAFKA-BROKERS" ; 6

1 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use HTTPS. Required when KeycloakRBACAuthorizer is used, or when an OAuth 2.0 enabled listener is used for inter-broker communication.
2 A valid issuer URI. Only access tokens issued by this issuer will be accepted. (Always required.)
3 The configured client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . Required when an introspection endpoint is used for token validation, or when KeycloakRBACAuthorizer is used.
4 The configured secret for the Kafka broker, which is the same for all brokers.
When the broker must authenticate to the authorization server, either a client secret, an access token, or a refresh token has to be specified.
5 (Optional) A long-lived refresh token for Kafka brokers.
6 (Optional) A long-lived access token for Kafka brokers.
Depending on how you apply OAuth 2.0 authentication, and the type of authorization server being used, add additional configuration settings:

listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  # ...
  oauth.check.issuer=false \ 1
  oauth.fallback.username.claim="CLIENT-ID" \ 2
  oauth.fallback.username.prefix="CLIENT-ACCOUNT" \ 3
  oauth.valid.token.type="bearer" \ 4
  oauth.userinfo.endpoint.uri="https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/userinfo" ; 5

1 If your authorization server does not provide an iss claim, it is not possible to perform an issuer check. In this situation, set oauth.check.issuer to false and do not specify an oauth.valid.issuer.uri . The default is true .
2 An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID . When a user authenticates using a username and password, to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if the primary user ID attribute is not available.
3 In situations where oauth.fallback.username.claim is applicable, it may also be necessary to prevent name collisions between the values of the username claim and those of the fallback username claim. Consider a situation where a client called producer exists, but a regular user called producer also exists. In order to differentiate between the two, you can use this property to add a prefix to the user ID of the client.
4 (Only applicable when using oauth.introspection.endpoint.uri ) Depending on the authorization server you are using, the introspection endpoint may or may not return the token type attribute, or it may contain different values. You can specify a valid token type value that the response from the introspection endpoint has to contain.
5 (Only applicable when using oauth.introspection.endpoint.uri ) The authorization server may be configured or implemented in such a way that it does not provide any identifiable information in an introspection endpoint response. In order to obtain the user ID, you can configure the URI of the userinfo endpoint as a fallback. The oauth.username.claim , oauth.fallback.username.claim , and oauth.fallback.username.prefix settings are applied to the response of the userinfo endpoint.
What to do next
Configure your Kafka clients to use OAuth 2.0
4.10.6.3. Configuring Kafka Java clients to use OAuth 2.0
This procedure describes how to configure Kafka producer and consumer APIs to use OAuth 2.0 for interaction with Kafka brokers.
Add a client callback plugin to your pom.xml file, and configure the system properties.
Prerequisites
AMQ Streams and Kafka are running
An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers
Kafka brokers are configured for OAuth 2.0
Procedure
Add the client library with OAuth 2.0 support to the pom.xml file for the Kafka client:

<dependency>
 <groupId>io.strimzi</groupId>
 <artifactId>kafka-oauth-client</artifactId>
 <version>0.6.1.redhat-00003</version>
</dependency>

Configure the system properties for the callback. For example:

System.setProperty(ClientConfig.OAUTH_TOKEN_ENDPOINT_URI, "https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token"); 1
System.setProperty(ClientConfig.OAUTH_CLIENT_ID, "CLIENT-NAME"); 2
System.setProperty(ClientConfig.OAUTH_CLIENT_SECRET, "CLIENT_SECRET"); 3
System.setProperty(ClientConfig.OAUTH_SCOPE, "SCOPE-VALUE"); 4

1 URI of the authorization server token endpoint.
2 Client ID, which is the name used when creating the client in the authorization server.
3 Client secret created when creating the client in the authorization server.
4 (Optional) The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope.
Enable the SASL OAUTHBEARER mechanism on a TLS encrypted connection in the Kafka client configuration. For example:

props.put("sasl.jaas.config", "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;");
props.put("security.protocol", "SASL_SSL"); 1
props.put("sasl.mechanism", "OAUTHBEARER");
props.put("sasl.login.callback.handler.class", "io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler");

1 Here we use SASL_SSL for use over TLS connections. Use SASL_PLAINTEXT over unencrypted connections.
Verify that the Kafka client can access the Kafka brokers.
4.11. Using OAuth 2.0 token-based authorization
If you are using OAuth 2.0 with Red Hat Single Sign-On for token-based authentication, you can also use Red Hat Single Sign-On to configure authorization rules to constrain client access to Kafka brokers. Authentication establishes the identity of a user. Authorization decides the level of access for that user.
AMQ Streams supports the use of OAuth 2.0 token-based authorization through Red Hat Single Sign-On Authorization Services , which allows you to manage security policies and permissions centrally.
Security policies and permissions defined in Red Hat Single Sign-On are used to grant access to resources on Kafka brokers. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers.
Kafka allows all users full access to brokers by default, and also provides the AclAuthorizer plugin to configure authorization based on Access Control Lists (ACLs). ZooKeeper stores ACL rules that grant or deny access to resources based on username . However, OAuth 2.0 token-based authorization with Red Hat Single Sign-On offers far greater flexibility in how you implement access control to Kafka brokers. In addition, you can configure your Kafka brokers to use OAuth 2.0 authorization and ACLs.
Additional resources
Using OAuth 2.0 token-based authentication
Kafka Authorization
Red Hat Single Sign-On documentation
4.11.1. OAuth 2.0 authorization mechanism
OAuth 2.0 authorization in AMQ Streams uses the Red Hat Single Sign-On Authorization Services REST endpoints to extend token-based authentication with Red Hat Single Sign-On by applying defined security policies on a particular user, and providing a list of permissions granted on different resources for that user. Policies use roles and groups to match permissions to users. OAuth 2.0 authorization enforces permissions locally based on the received list of grants for the user from Red Hat Single Sign-On Authorization Services.
4.11.1.1. Kafka broker custom authorizer
A Red Hat Single Sign-On authorizer ( KeycloakRBACAuthorizer ) is provided with AMQ Streams. To be able to use the REST endpoints for Authorization Services provided by Red Hat Single Sign-On, you configure a custom authorizer on the Kafka broker.
The authorizer fetches a list of granted permissions from the authorization server as needed, and enforces authorization locally on the Kafka broker, making rapid authorization decisions for each client request.
4.11.2. Configuring OAuth 2.0 authorization support
This procedure describes how to configure Kafka brokers to use OAuth 2.0 authorization using Red Hat Single Sign-On Authorization Services.
Before you begin
Consider the access you require or want to limit for certain users. You can use a combination of Red Hat Single Sign-On groups , roles , clients , and users to configure access in Red Hat Single Sign-On.
Typically, groups are used to match users based on organizational departments or geographical locations, and roles are used to match users based on their function.
With Red Hat Single Sign-On, you can store users and groups in LDAP, whereas clients and roles cannot be stored this way. Storage of and access to user data may be a factor in how you choose to configure authorization policies.
Note
Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker.
Prerequisites
AMQ Streams must be configured to use OAuth 2.0 with Red Hat Single Sign-On for token-based authentication . You use the same Red Hat Single Sign-On server endpoint when you set up authorization.
You need to understand how to manage policies and permissions for Red Hat Single Sign-On Authorization Services, as described in the Red Hat Single Sign-On documentation .
Procedure
Access the Red Hat Single Sign-On Admin Console or use the Red Hat Single Sign-On Admin CLI to enable Authorization Services for the Kafka broker client you created when setting up OAuth 2.0 authentication.
Use Authorization Services to define resources, authorization scopes, policies, and permissions for the client.
Bind the permissions to users and clients by assigning them roles and groups.
Configure the Kafka brokers to use Red Hat Single Sign-On authorization.
Add the following to the Kafka server.properties configuration file to install the authorizer in Kafka:

authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakRBACAuthorizer
principal.builder.class=io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder

Add configuration for the Kafka brokers to access the authorization server and Authorization Services. Here we show example configuration added as additional properties to server.properties , but you can also define them as environment variables using capitalized or upper-case naming conventions, as in the sketch that follows; the server.properties form is shown after it.
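A hedged sketch of the environment-variable form for the first two settings, assuming a direct mapping of dots to underscores with upper-case names; verify the exact variable names against your AMQ Streams version:

# Assumed environment-variable equivalents of the strimzi.authorization.* properties
STRIMZI_AUTHORIZATION_TOKEN_ENDPOINT_URI="https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token"
STRIMZI_AUTHORIZATION_CLIENT_ID="kafka"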
strimzi.authorization.token.endpoint.uri="https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token" 1
strimzi.authorization.client.id="kafka" 2

1 The OAuth 2.0 token endpoint URL to Red Hat Single Sign-On. For production, always use HTTPS.
2 The client ID of the OAuth 2.0 client definition in Red Hat Single Sign-On that has Authorization Services enabled. Typically, kafka is used as the ID.
(Optional) Add configuration for specific Kafka clusters. For example:

strimzi.authorization.kafka.cluster.name="kafka-cluster" 1

1 The name of a specific Kafka cluster. Names are used to target permissions, making it possible to manage multiple clusters within the same Red Hat Single Sign-On realm. The default value is kafka-cluster .
(Optional) Delegate to simple authorization. For example:

strimzi.authorization.delegate.to.kafka.acl="false" 1

1 Delegate authorization to Kafka AclAuthorizer if access is denied by Red Hat Single Sign-On Authorization Services policies. The default is false .
(Optional) Add configuration for the TLS connection to the authorization server. For example:

strimzi.authorization.ssl.truststore.location=<path-to-truststore> 1
strimzi.authorization.ssl.truststore.password=<my-truststore-password> 2
strimzi.authorization.ssl.truststore.type=JKS 3
strimzi.authorization.ssl.secure.random.implementation=SHA1PRNG 4
strimzi.authorization.ssl.endpoint.identification.algorithm=HTTPS 5

1 The path to the truststore that contains the certificates.
2 The password for the truststore.
3 The truststore type. If not set, the default Java keystore type is used.
4 Random number generator implementation. If not set, the Java platform SDK default is used.
5 Hostname verification. If set to an empty string, the hostname verification is turned off. If not set, the default value is HTTPS , which enforces hostname verification for server certificates.
(Optional) Configure the refresh of grants from the authorization server. The grants refresh job works by enumerating the active tokens and requesting the latest grants for each. For example:

strimzi.authorization.grants.refresh.period.seconds="120" 1
strimzi.authorization.grants.refresh.pool.size="10" 2

1 Specifies how often the list of grants from the authorization server is refreshed (once per minute by default). To turn grants refresh off for debugging purposes, set to "0" .
2 Specifies the size of the thread pool (the degree of parallelism) used by the grants refresh job. The default value is "5" .
Verify the configured permissions by accessing Kafka brokers as clients or users with specific roles, making sure that they have the necessary access and do not have access they are not supposed to have.
4.12. Using OPA policy-based authorization
Open Policy Agent (OPA) is an open-source policy engine. You can integrate OPA with AMQ Streams to act as a policy-based authorization mechanism for permitting client operations on Kafka brokers.
When a request is made from a client, OPA evaluates the request against policies defined for Kafka access, then allows or denies the request.
Note
Red Hat does not support the OPA server.
Additional resources
Open Policy Agent website
4.12.1. Defining OPA policies
Before integrating OPA with AMQ Streams, consider how you will define policies to provide fine-grained access controls.
You can define access control for Kafka clusters, consumer groups, and topics. For instance, you can define an authorization policy that allows write access from a producer client to a specific broker topic.
For this, the policy might specify the:
User principal and host address associated with the producer client
Operations allowed for the client
Resource type ( topic ) and resource name the policy applies to
Allow and deny decisions are written into the policy, and a response is provided based on the request and the client identification data provided.
In our example, the producer client would have to satisfy the policy to be allowed to write to the topic.
4.12.2. Connecting to the OPA
To enable Kafka to access the OPA policy engine to query access control policies, you configure a custom OPA authorizer plugin ( kafka-authorizer-opa- VERSION .jar ) in your Kafka server.properties file.
When a request is made by a client, the plugin queries the OPA policy engine using a specified URL address and a REST endpoint, which must be the name of the defined policy.
The plugin provides the details of the client request (user principal, operation, and resource) in JSON format to be checked against the policy. The details include the unique identity of the client; for example, taking the distinguished name from the client certificate if TLS authentication is used.
OPA uses the data to provide a response, either true or false, to the plugin to allow or deny the request.
4.12.3. Configuring OPA authorization support
This procedure describes how to configure Kafka brokers to use OPA authorization.
Before you begin
Consider the access you require or want to limit for certain users. You can use a combination of users and Kafka resources to define OPA policies.
It is possible to set up OPA to load user information from an LDAP data source.
Note
Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker.
Prerequisites
An OPA server must be available for connection.
OPA authorizer plugin for Kafka
Procedure
Write the OPA policies required for authorizing client requests to perform operations on the Kafka brokers. See Defining OPA policies .
Now configure the Kafka brokers to use OPA.
Install the OPA authorizer plugin for Kafka . See Connecting to the OPA .
Make sure that the plugin files are included in the Kafka classpath.
Add the following to the Kafka server.properties configuration file to enable the OPA plugin:

authorizer.class.name=com.bisnode.kafka.authorization.OpaAuthorizer

Add further configuration to server.properties for the Kafka brokers to access the OPA policy engine and policies. For example:

opa.authorizer.url=https://OPA-ADDRESS/allow 1
opa.authorizer.allow.on.error=false 2
opa.authorizer.cache.initial.capacity=50000 3
opa.authorizer.cache.maximum.size=50000 4
opa.authorizer.cache.expire.after.seconds=600000 5
super.users=User:alice;User:bob 6

1 (Required) The URL of the OPA policy engine endpoint for the policy that the authorizer plugin queries. In this example, the policy is called allow .
2 Flag to specify whether a client is allowed or denied access by default if the authorizer plugin fails to connect with the OPA policy engine.
3 Initial capacity of the local cache, in number of entries. The cache is used so that the plugin does not have to query the OPA policy engine for every request.
4 Maximum capacity of the local cache, in number of entries.
5 Time in seconds after which entries in the local cache expire and are reloaded from the OPA policy engine.
6 A list of user principals treated as super users, so that they are always allowed without querying the Open Policy Agent policy.
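For orientation, the JSON input document that the plugin sends to OPA for a producer write request looks broadly like the following. This is an illustrative sketch only; the exact field names and nesting depend on the version of the OPA authorizer plugin, so verify the structure against the plugin documentation before writing policies against it:

{
  "requestContext": {
    "principal": { "principalType": "User", "name": "CN=my-producer" },
    "clientAddress": "192.168.64.1"
  },
  "operation": { "name": "Write" },
  "resource": {
    "resourceType": { "name": "Topic" },
    "name": "my-topic"
  }
}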
Refer to the Open Policy Agent website for information on authentication and authorization options.
Verify the configured permissions by accessing Kafka brokers using clients that have and do not have the correct authorization.
4.13. Logging
Kafka brokers use Log4j as their logging infrastructure. By default, the logging configuration is read from the log4j.properties configuration file, which should be placed either in the /opt/kafka/config/ directory or on the classpath. The location and name of the configuration file can be changed using the Java property log4j.configuration , which can be passed to Kafka by using the KAFKA_LOG4J_OPTS environment variable:

su - kafka
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/my/path/to/log4j.config"; /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties

For more information about Log4j configurations, see the Log4j manual .
4.13.1. Dynamically change logging levels for Kafka broker loggers
Kafka broker logging is provided by multiple broker loggers in each broker. You can dynamically change the logging level for broker loggers without having to restart the broker. Increasing the level of detail returned in logs, by changing from INFO to DEBUG for example, is useful for investigating performance issues in a Kafka cluster.
Broker loggers can also be dynamically reset to their default logging levels.
Prerequisites
AMQ Streams is installed on the host
ZooKeeper and Kafka are running
Procedure
Switch to the kafka user:

su - kafka

List all the broker loggers for a broker by using the kafka-configs.sh tool:

/opt/kafka/bin/kafka-configs.sh --bootstrap-server BOOTSTRAP-ADDRESS --describe --entity-type broker-loggers --entity-name BROKER-ID

For example, for broker 0 :

/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 0

This returns the logging level for each logger: TRACE , DEBUG , INFO , WARN , ERROR , or FATAL . For example:

#...
kafka.controller.ControllerChannelManager=INFO sensitive=false synonyms={}
kafka.log.TimeIndex=INFO sensitive=false synonyms={}

Change the logging level for one or more broker loggers. Use the --alter and --add-config options and specify each logger and its level as a comma-separated list in double quotes:

/opt/kafka/bin/kafka-configs.sh --bootstrap-server BOOTSTRAP-ADDRESS --alter --add-config "LOGGER-ONE=NEW-LEVEL,LOGGER-TWO=NEW-LEVEL" --entity-type broker-loggers --entity-name BROKER-ID

For example, for broker 0 :

/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config "kafka.controller.ControllerChannelManager=WARN,kafka.log.TimeIndex=WARN" --entity-type broker-loggers --entity-name 0

If successful, this returns:

Completed updating config for broker: 0.

Resetting a broker logger
You can reset one or more broker loggers to their default logging levels by using the kafka-configs.sh tool. Use the --alter and --delete-config options and specify each broker logger as a comma-separated list in double quotes, as in the concrete sketch below:

/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config "LOGGER-ONE,LOGGER-TWO" --entity-type broker-loggers --entity-name BROKER-ID

Additional resources
Updating Broker Configs in the Apache Kafka documentation.
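For example, to reset the two loggers changed earlier to their default levels on broker 0, following the same pattern as the generic command above:

/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config "kafka.controller.ControllerChannelManager,kafka.log.TimeIndex" --entity-type broker-loggers --entity-name 0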
[ "zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181", "zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181/my-cluster-1", "listeners=INT1://:9092,INT2://:9093,REPLICATION://:9094", "listeners=INT1://:9092,INT2://:9093 advertised.listeners=INT1://my-broker-1.my-domain.com:1234,INT2://my-broker-1.my-domain.com:1234:9093", "inter.broker.listener.name=REPLICATION", "log.dirs=/var/lib/kafka", "log.dirs=/var/lib/kafka1,/var/lib/kafka2,/var/lib/kafka3", "broker.id=1", "broker.id=0 zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=REPLICATION://:9091,PLAINTEXT://:9092 inter.broker.listener.name=REPLICATION log.dirs=/var/lib/kafka", "su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "jcmd | grep Kafka", "echo dump | ncat zoo1.my-domain.com 2181", "SessionTracker dump: org.apache.zookeeper.server.quorum.LearnerSessionTracker@28848ab9 ephemeral nodes dump: Sessions with Ephemerals (3): 0x20000015dd00000: /brokers/ids/1 0x10000015dc70000: /controller /brokers/ids/0 0x10000015dc70001: /brokers/ids/2", "Client { org.apache.kafka.common.security.plain.PlainLoginModule required username=\"kafka\" password=\"123456\"; };", "Client { org.apache.kafka.common.security.plain.PlainLoginModule required username=\" <Username> \" password=\" <Password> \"; };", "Client { org.apache.kafka.common.security.plain.PlainLoginModule required username=\"kafka\" password=\"123456\"; };", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer", "super.users=User:admin,User:operator", "authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer", "bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2", "bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1", "bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --add --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1", "bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --list --topic myTopic Current ACLs for resource `Topic:myTopic`: User:user1 has Allow permission for operations: Read from hosts: * User:user2 has Allow permission for operations: Read from hosts: * User:user2 has Deny permission for operations: Read from hosts: 127.0.0.1 User:user1 has Allow permission for operations: Describe from hosts: * User:user2 has Allow permission for operations: Describe from hosts: * User:user2 has Deny permission for operations: Describe from hosts: 127.0.0.1", "bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove 
--operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2", "bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1", "bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zoo1.my-domain.com:2181 --remove --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1", "zookeeper.set.acl=true", "zookeeper.set.acl=true", "zookeeper.set.acl=true", "su - kafka cd /opt/kafka KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/jaas.conf\"; ./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect= <ZooKeeperURL> exit", "su - kafka cd /opt/kafka KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/jaas.conf\"; ./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=zoo1.my-domain.com:2181 exit", "listeners=INT1://:9092,INT2://:9093,REPLICATION://:9094", "listener.security.protocol.map=INT1:SASL_PLAINTEXT,INT2:SASL_SSL,REPLICATION:SSL", "listener.security.protocol.map=INT1:SSL,INT2:SSL,REPLICATION:SSL", "ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456", "listeners=INT1://:9092,INT2://:9093,REPLICATION://:9094 listener.security.protocol.map=INT1:SSL,INT2:SSL,REPLICATION:SSL Default configuration - will be used for listeners INT1 and INT2 ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456 Different configuration for listener REPLICATION listener.name.replication.ssl.keystore.location=/path/to/keystore/server-1.jks listener.name.replication.ssl.keystore.password=123456", "listeners=UNENCRYPTED://:9092,ENCRYPTED://:9093,REPLICATION://:9094 listener.security.protocol.map=UNENCRYPTED:PLAINTEXT,ENCRYPTED:SSL,REPLICATION:PLAINTEXT ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456", "ssl.truststore.location=/path/to/keystore/server-1.jks ssl.truststore.password=123456", "KAFKA_OPTS=\"-Djava.security.auth.login.config=/path/to/my/jaas.config\"; bin/kafka-server-start.sh", "sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512", "sasl.mechanism.inter.broker.protocol=PLAIN", "KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; };", "KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"123456\" user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; };", "KafkaServer { org.apache.kafka.common.security.scram.ScramLoginModule required; };", "sasl.enabled.mechanisms=SCRAM-SHA-256,SCRAM-SHA-512 sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512", "bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[password=123456],SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name user1", "bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name user1", "KafkaServer { 
com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/etc/security/keytabs/kafka_server.keytab\" principal=\"kafka/[email protected]\"; };", "sasl.enabled.mechanisms=GSSAPI sasl.mechanism.inter.broker.protocol=GSSAPI sasl.kerberos.service.name=kafka", "KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/etc/security/keytabs/kafka_server.keytab\" principal=\"kafka/[email protected]\"; org.apache.kafka.common.security.scram.ScramLoginModule required; };", "ssl.truststore.location=/path/to/truststore.jks ssl.truststore.password=123456 ssl.client.auth=required", "KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; };", "listeners=INSECURE://:9092,AUTHENTICATED://:9093,REPLICATION://:9094 listener.security.protocol.map=INSECURE:PLAINTEXT,AUTHENTICATED:SASL_PLAINTEXT,REPLICATION:PLAINTEXT sasl.enabled.mechanisms=PLAIN", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "KafkaServer { org.apache.kafka.common.security.scram.ScramLoginModule required; };", "listeners=INSECURE://:9092,AUTHENTICATED://:9093,REPLICATION://:9094 listener.security.protocol.map=INSECURE:PLAINTEXT,AUTHENTICATED:SASL_PLAINTEXT,REPLICATION:PLAINTEXT sasl.enabled.mechanisms=SCRAM-SHA-512", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --alter --add-config 'SCRAM-SHA-512=[password= <Password> ]' --entity-type users --entity-name <Username>", "bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name user1", "bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name <Username>", "bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name user1", "sasl.enabled.mechanisms=OAUTHBEARER 1 listeners=CLIENT://0.0.0.0:9092 2 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 3 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER 4 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 5 inter.broker.listener.name=CLIENT 6 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 7 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \\ 8 oauth.valid.issuer.uri=\"https:// AUTH-SERVER-ADDRESS \" \\ 9 oauth.jwks.endpoint.uri=\"https:// AUTH-SERVER-ADDRESS /jwks\" \\ 10 oauth.username.claim=\"preferred_username\" \\ 11 oauth.client.id=\"kafka-broker\" \\ 12 oauth.client.secret=\"kafka-secret\" \\ 13 oauth.token.endpoint.uri=\"https:// AUTH-SERVER-ADDRESS /token\" ; 14 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 15 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 16", "sasl.enabled.mechanisms= 
listeners=REPLICATION://kafka:9091,CLIENT://kafka:9092 1 listener.security.protocol.map=REPLICATION:SSL,CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER inter.broker.listener.name=REPLICATION listener.name.replication.ssl.keystore.password= KEYSTORE-PASSWORD 3 listener.name.replication.ssl.truststore.password= TRUSTSTORE-PASSWORD listener.name.replication.ssl.keystore.type=JKS listener.name.replication.ssl.truststore.type=JKS listener.name.replication.ssl.endpoint.identification.algorithm=HTTPS 4 listener.name.replication.ssl.secure.random.implementation=SHA1PRNG 5 listener.name.replication.ssl.keystore.location= PATH-TO-KEYSTORE 6 listener.name.replication.ssl.truststore.location= PATH-TO-TRUSTSTORE 7 listener.name.replication.ssl.client.auth=required 8 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https:// AUTH-SERVER-ADDRESS \" oauth.jwks.endpoint.uri=\"https:// AUTH-SERVER-ADDRESS /jwks\" oauth.username.claim=\"preferred_username\" ; 9", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https:// AUTH-SERVER-ADDRESS \" \\ 1 oauth.jwks.endpoint.uri=\"https:// AUTH-SERVER-ADDRESS /jwks\" \\ 2 oauth.jwks.refresh.seconds=\" 300 \" \\ 3 oauth.jwks.refresh.min.pause.seconds=\" 1 \" \\ 4 oauth.jwks.expiry.seconds=\" 360 \" \\ 5 oauth.username.claim=\"preferred_username\" \\ 6 oauth.ssl.truststore.location=\" PATH-TO-TRUSTSTORE-P12-FILE \" \\ 7 oauth.ssl.truststore.password=\" TRUSTSTORE-PASSWORD \" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" ; 9", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.introspection.endpoint.uri=\"https:// AUTH-SERVER-ADDRESS /introspection\" \\ 1 oauth.client.id=\"kafka-broker\" \\ 2 oauth.client.secret=\"kafka-broker-secret\" \\ 3 oauth.ssl.truststore.location=\" PATH-TO-TRUSTSTORE-P12-FILE \" \\ 4 oauth.ssl.truststore.password=\" TRUSTSTORE-PASSWORD \" \\ 5 oauth.ssl.truststore.type=\"PKCS12\" \\ 6 oauth.username.claim=\"preferred_username\" ; 7", "listener.name.client.oauthbearer.connections.max.reauth.ms=3600000", "sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\" https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME \" oauth.jwks.endpoint.uri=\" https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs \" oauth.jwks.refresh.seconds=\" 300 \" oauth.jwks.refresh.min.pause.seconds=\" 1 \" oauth.jwks.expiry.seconds=\" 360 \" 
oauth.username.claim=\"preferred_username\" oauth.ssl.truststore.location=\" PATH-TO-TRUSTSTORE-P12-FILE \" oauth.ssl.truststore.password=\" TRUSTSTORE-PASSWORD \" oauth.ssl.truststore.type=\"PKCS12\" ; listener.name.client.oauthbearer.connections.max.reauth.ms=3600000", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.introspection.endpoint.uri=\" https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/introspection \" #", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.client.id=\"kafka-broker\" oauth.client.secret=\"kafka-broker-secret\" oauth.ssl.truststore.location=\" PATH-TO-TRUSTSTORE-P12-FILE \" oauth.ssl.truststore.password=\" TRUSTSTORE-PASSWORD \" oauth.ssl.truststore.type=\"PKCS12\" ;", "oauth.ssl.endpoint.identification.algorithm=\"\"", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.token.endpoint.uri=\" https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token \" \\ 1 oauth.valid.issuer.uri=\"https:// https://AUTH-SERVER-ADDRESS/auth/REALM-NAME \" \\ 2 oauth.client.id=\"kafka-broker\" \\ 3 oauth.client.secret=\"kafka-broker-secret\" \\ 4 oauth.refresh.token=\" REFRESH-TOKEN-FOR-KAFKA-BROKERS \" \\ 5 oauth.access.token=\" ACCESS-TOKEN-FOR-KAFKA-BROKERS \" ; 6", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.check.issuer=false \\ 1 oauth.fallback.username.claim=\" CLIENT-ID \" \\ 2 oauth.fallback.username.prefix=\" CLIENT-ACCOUNT \" \\ 3 oauth.valid.token.type=\"bearer\" \\ 4 oauth.userinfo.endpoint.uri=\" https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/userinfo \" ; 5", "<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.6.1.redhat-00003</version> </dependency>", "System.setProperty(ClientConfig.OAUTH_TOKEN_ENDPOINT_URI, \" https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token \"); 1 System.setProperty(ClientConfig.OAUTH_CLIENT_ID, \" CLIENT-NAME \"); 2 System.setProperty(ClientConfig.OAUTH_CLIENT_SECRET, \" CLIENT_SECRET \"); 3 System.setProperty(ClientConfig.OAUTH_SCOPE, \" SCOPE-VALUE \") 4", "props.put(\"sasl.jaas.config\", \"org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;\"); props.put(\"security.protocol\", \"SASL_SSL\"); 1 props.put(\"sasl.mechanism\", \"OAUTHBEARER\"); props.put(\"sasl.login.callback.handler.class\", \"io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler\");", "authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakRBACAuthorizer principal.builder.class=io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder", "strimzi.authorization.token.endpoint.uri=\" https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token \" 1 strimzi.authorization.client.id=\"kafka\" 2", "strimzi.authorization.kafka.cluster.name=\"kafka-cluster\" 1", "strimzi.authorization.delegate.to.kafka.acl=\"false\" 1", "strimzi.authorization.ssl.truststore.location= <path-to-truststore> 1 strimzi.authorization.ssl.truststore.password= <my-truststore-password> 2 strimzi.authorization.ssl.truststore.type=JKS 3 strimzi.authorization.ssl.secure.random.implementation=SHA1PRNG 4 
strimzi.authorization.ssl.endpoint.identification.algorithm=HTTPS 5", "strimzi.authorization.grants.refresh.period.seconds=\"120\" 1 strimzi.authorization.grants.refresh.pool.size=\"10\" 2", "authorizer.class.name: com.bisnode.kafka.authorization.OpaAuthorizer", "opa.authorizer.url=https:// OPA-ADDRESS /allow 1 opa.authorizer.allow.on.error=false 2 opa.authorizer.cache.initial.capacity=50000 3 opa.authorizer.cache.maximum.size=50000 4 opa.authorizer.cache.expire.after.seconds=600000 5 super.users=User:alice;User:bob 6", "su - kafka export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:/my/path/to/log4j.config\"; /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties", "su - kafka", "/opt/kafka/bin/kafka-configs.sh --bootstrap-server BOOTSTRAP-ADDRESS --describe --entity-type broker-loggers --entity-name BROKER-ID", "/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 0", "# kafka.controller.ControllerChannelManager=INFO sensitive=false synonyms={} kafka.log.TimeIndex=INFO sensitive=false synonyms={}", "/opt/kafka/bin/kafka-configs.sh --bootstrap-server BOOTSTRAP-ADDRESS --alter --add-config \" LOGGER-ONE=NEW-LEVEL , LOGGER-TWO=NEW-LEVEL \" --entity-type broker-loggers --entity-name BROKER-ID", "/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config \"kafka.controller.ControllerChannelManager=WARN,kafka.log.TimeIndex=WARN\" --entity-type broker-loggers --entity-name 0", "Completed updating config for broker: 0.", "/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config \" LOGGER-ONE , LOGGER-TWO \" --entity-type broker-loggers --entity-name BROKER-ID" ]
Chapter 3. CustomResourceDefinition [apiextensions.k8s.io/v1]
Description
CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>.
Type
object
Required
spec
3.1. Specification
apiVersion (string): APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind (string): Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata (ObjectMeta): Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec (object): CustomResourceDefinitionSpec describes how a user wants their resource to appear
status (object): CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition
3.1.1. .spec
Description
CustomResourceDefinitionSpec describes how a user wants their resource to appear
Type
object
Required
group, names, scope, versions
conversion (object): CustomResourceConversion describes how to convert different versions of a CR.
group (string): group is the API group of the defined custom resource. The custom resources are served under /apis/<group>/... . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ).
names (object): CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition
preserveUnknownFields (boolean): preserveUnknownFields indicates that object fields which are not specified in the OpenAPI schema should be preserved when persisting to storage. apiVersion, kind, metadata and known fields inside metadata are always preserved. This field is deprecated in favor of setting x-preserve-unknown-fields to true in spec.versions[*].schema.openAPIV3Schema . See https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning for details.
scope (string): scope indicates whether the defined custom resource is cluster- or namespace-scoped. Allowed values are Cluster and Namespaced .
versions (array): versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10.
versions[] (object): CustomResourceDefinitionVersion describes a version for CRD.
3.1.2. .spec.conversion
Description
CustomResourceConversion describes how to convert different versions of a CR.
Type object Required strategy Property Type Description strategy string strategy specifies how custom resources are converted between versions. Allowed values are: - "None" : The converter only changes the apiVersion and does not touch any other field in the custom resource. - "Webhook" : The API server calls an external webhook to do the conversion. Additional information is needed for this option. This requires spec.preserveUnknownFields to be false, and spec.conversion.webhook to be set. webhook object WebhookConversion describes how to call a conversion webhook 3.1.3. .spec.conversion.webhook Description WebhookConversion describes how to call a conversion webhook Type object Required conversionReviewVersions Property Type Description clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook. conversionReviewVersions array (string) conversionReviewVersions is an ordered list of preferred ConversionReview versions the Webhook expects. The API server will use the first version in the list which it supports. If none of the versions specified in this list are supported by the API server, conversion will fail for the custom resource. If a persisted Webhook configuration specifies allowed versions and does not include any versions known to the API server, calls to the webhook will fail. 3.1.4. .spec.conversion.webhook.clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook. Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth, e.g. "user:password@", is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 3.1.5. .spec.conversion.webhook.clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path at which the webhook will be contacted. port integer port is an optional service port at which the webhook will be contacted. port should be a valid port number (1-65535, inclusive). Defaults to 443 for backward compatibility. 3.1.6.
.spec.names Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase. shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.7. .spec.versions Description versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. Type array 3.1.8. .spec.versions[] Description CustomResourceDefinitionVersion describes a version for CRD. Type object Required name served storage Property Type Description additionalPrinterColumns array additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. additionalPrinterColumns[] object CustomResourceColumnDefinition specifies a column for server side printing. deprecated boolean deprecated indicates this version of the custom resource API is deprecated. When set to true, API requests to this version receive a warning header in the server response. Defaults to false. deprecationWarning string deprecationWarning overrides the default warning returned to API clients. May only be set when deprecated is true. The default warning indicates this version is deprecated and recommends use of the newest served version of equal or greater stability, if one exists. name string name is the version name, e.g. "v1", "v2beta1", etc. The custom resources are served under this version at /apis/<group>/<version>/... if served is true. schema object CustomResourceValidation is a list of validation methods for CustomResources. 
served boolean served is a flag enabling/disabling this version from being served via REST APIs storage boolean storage indicates this version should be used when persisting custom resources to storage. There must be exactly one version with storage=true. subresources object CustomResourceSubresources defines the status and scale subresources for CustomResources. 3.1.9. .spec.versions[].additionalPrinterColumns Description additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. Type array 3.1.10. .spec.versions[].additionalPrinterColumns[] Description CustomResourceColumnDefinition specifies a column for server side printing. Type object Required name type jsonPath Property Type Description description string description is a human readable description of this column. format string format is an optional OpenAPI type definition for this column. The 'name' format is applied to the primary identifier column to assist clients in identifying that this column is the resource name. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. jsonPath string jsonPath is a simple JSON path (i.e. with array notation) which is evaluated against each custom resource to produce the value for this column. name string name is a human readable name for the column. priority integer priority is an integer defining the relative importance of this column compared to others. Lower numbers are considered higher priority. Columns that may be omitted in limited space scenarios should be given a priority greater than 0. type string type is an OpenAPI type definition for this column. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. 3.1.11. .spec.versions[].schema Description CustomResourceValidation is a list of validation methods for CustomResources. Type object Property Type Description openAPIV3Schema object openAPIV3Schema is the OpenAPI v3 schema to use for validation and pruning. 3.1.12. .spec.versions[].subresources Description CustomResourceSubresources defines the status and scale subresources for CustomResources. Type object Property Type Description scale object CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. status object CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza 3.1.13. .spec.versions[].subresources.scale Description CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. Type object Required specReplicasPath statusReplicasPath Property Type Description labelSelectorPath string labelSelectorPath defines the JSON path inside of a custom resource that corresponds to Scale status.selector . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status or .spec . Must be set to work with HorizontalPodAutoscaler.
The field pointed to by this JSON path must be a string field (not a complex selector struct) which contains a serialized label selector in string form. More info: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions#scale-subresource If there is no value under the given path in the custom resource, the status.selector value in the /scale subresource will default to the empty string. specReplicasPath string specReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale spec.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .spec . If there is no value under the given path in the custom resource, the /scale subresource will return an error on GET. statusReplicasPath string statusReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale status.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status . If there is no value under the given path in the custom resource, the status.replicas value in the /scale subresource will default to 0. 3.1.14. .spec.versions[].subresources.status Description CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza Type object 3.1.15. .status Description CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition Type object Property Type Description acceptedNames object CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition conditions array conditions indicate state for particular aspects of a CustomResourceDefinition conditions[] object CustomResourceDefinitionCondition contains details for the current condition of this CustomResourceDefinition. storedVersions array (string) storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from spec.versions while they exist in this list. 3.1.16. .status.acceptedNames Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase.
shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.17. .status.conditions Description conditions indicate state for particular aspects of a CustomResourceDefinition Type array 3.1.18. .status.conditions[] Description CustomResourceDefinitionCondition contains details for the current condition of this CustomResourceDefinition. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. type string type is the type of the condition. Types include Established, NamesAccepted and Terminating. 3.2. API endpoints The following API endpoints are available: /apis/apiextensions.k8s.io/v1/customresourcedefinitions DELETE : delete collection of CustomResourceDefinition GET : list or watch objects of kind CustomResourceDefinition POST : create a CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions GET : watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} DELETE : delete a CustomResourceDefinition GET : read the specified CustomResourceDefinition PATCH : partially update the specified CustomResourceDefinition PUT : replace the specified CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} GET : watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status GET : read status of the specified CustomResourceDefinition PATCH : partially update status of the specified CustomResourceDefinition PUT : replace status of the specified CustomResourceDefinition 3.2.1. /apis/apiextensions.k8s.io/v1/customresourcedefinitions HTTP method DELETE Description delete collection of CustomResourceDefinition Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CustomResourceDefinition Table 3.3. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinitionList schema 401 - Unauthorized Empty HTTP method POST Description create a CustomResourceDefinition Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.6. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 202 - Accepted CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.2. /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions HTTP method GET Description watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition HTTP method DELETE Description delete a CustomResourceDefinition Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CustomResourceDefinition Table 3.11. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CustomResourceDefinition Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CustomResourceDefinition Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.16. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.4. /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} Table 3.17. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition HTTP method GET Description watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status Table 3.19. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition HTTP method GET Description read status of the specified CustomResourceDefinition Table 3.20. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CustomResourceDefinition Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CustomResourceDefinition Table 3.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.24. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.25. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty
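As an illustration of how the query parameters documented above combine on the wire, the following is a hedged sketch of a request that server-side validates a merge patch against a CRD without persisting it. The API server host, the bearer token, and the CRD name are placeholders, not values from this document.

$ curl -k -X PATCH \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/merge-patch+json" \
    -d '{"metadata":{"labels":{"team":"platform"}}}' \
    "https://<api-server>:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/crontabs.stable.example.com?dryRun=All&fieldValidation=Strict"

With dryRun=All the server runs admission and validation but stores nothing, and with fieldValidation=Strict any unknown or duplicate field in the patch body fails the request with a BadRequest error, as described in Table 3.12.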
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/extension_apis/customresourcedefinition-apiextensions-k8s-io-v1
Chapter 20. Troubleshooting: Correcting over-reporting of virtualized RHEL
Chapter 20. Troubleshooting: Correcting over-reporting of virtualized RHEL So that the subscriptions service can accurately report Red Hat Enterprise Linux in virtualized environments such as virtual data center (VDC) subscriptions, host-guest mappings must be present in the data that the subscriptions service analyzes. For Red Hat Satellite, the Satellite inventory upload plugin and the virt-who tool gather these mappings for the subscriptions service. For Red Hat Subscription Management, the virt-who tool gathers these mappings. For each of these subscription management options, all necessary tools must be both installed and properly configured to accurately report virtual RHEL usage. If these tools are not used, virtual usage data cannot be correctly calculated. In that type of scenario, guests are counted, not ignored. Each guest is counted as an individual virtual machine, leading to a rapid escalation of virtualized socket count and unusual deployment over capacity. When multiplied over numerous VDC subscriptions, all running multiple guests, the subscriptions service could easily show RHEL overdeployment that significantly exceeds your subscription threshold. The following example contains an isolated RHEL usage and utilization graph view from the subscriptions service interface. For the first part of the time period, the graph displays physical, virtual, and public cloud usage, but no hypervisor usage is found. For the second part of the time period, where the Satellite inventory upload plugin and virt-who are correctly configured to find and report host-guest mappings, the graph displays new reporting of hypervisor usage and a substantial drop in the reporting of virtual usage. Throughout the time period displayed, the subscription threshold remains constant. Before correction, total RHEL usage exceeds the subscription threshold. After correction, hypervisor usage can be counted and virtual usage is considerably reduced. The total RHEL usage now falls below the subscription threshold. Figure 20.1. Corrected RHEL virtual usage and newly displayed RHEL hypervisor usage with virt-who and Satellite data Procedure To correct over-reporting of virtualized RHEL usage in the subscriptions service, make sure that you have completed the following steps: Review your RHEL subscription profile in Red Hat Satellite or Red Hat Subscription Management and determine which subscriptions require virt-who. In the Satellite Web UI, click Content > Subscriptions . If needed, use the Search field to narrow the list of results. Review the values in the Requires Virt-Who column. If any check box is selected, you must configure virt-who. In the Overview page of the Red Hat Subscription Management Customer Portal interface, click View All Subscriptions . If needed, use filtering to narrow the list of results. Select a subscription name to view the details. If Virt-Who: Required appears in the SKU Details , you must configure virt-who. Confirm that the virt-who tool is deployed on your hypervisors so that host-guest mappings can be communicated. For more information, see the virtualization documentation that is appropriate for your subscription management tool. 
For Satellite, see the following information as appropriate for your version: Configuring Virtual Machine Subscriptions in Red Hat Satellite For Red Hat Subscription Management, see the following information: Configuring Virtual Machine Subscriptions in Red Hat Subscription Management Note If you are also using simple content access, the workflows for some of the subscription management tools such as virt-who are altered. In addition to reviewing the information in the subscription management documentation, see also the information in the Simple Content Access article. For Satellite, make sure that the Satellite inventory upload plugin is installed and configured to supply data to the subscriptions service.
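The authoritative virt-who setup steps are in the documents linked above. As a hypothetical illustration only, a hypervisor connection is typically described by a small configuration file under /etc/virt-who.d/ on the host that runs virt-who; every value below is a placeholder:

[vcenter-example]
type=esx
server=vcenter.example.com
username=virt-who-user
encrypted_password=<hash produced by the virt-who-password utility>
owner=<organization ID>
hypervisor_id=hostname

After adding or changing such a file, restarting the virt-who service lets it begin reporting the host-guest mappings that the subscriptions service needs to correct the over-reporting described above.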
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/proc-trbl-correcting-over-reporting-of-virtualized-rhel_assembly-troubleshooting-common-questions-ctxt
Chapter 10. Preparing for users
Chapter 10. Preparing for users After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including taking steps to prepare for users. 10.1. Understanding identity provider configuration The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 10.1.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 10.1.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . After you define an identity provider, you can use RBAC to define and apply permissions . 10.1.3. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. add Provisions a user with the identity's preferred user name. 
If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 10.1.4. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider 1
    mappingMethod: claim 2
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 3

1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . 10.2. Using RBAC to define and apply permissions Understand and apply role-based access control. 10.2.1. RBAC overview Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use the cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action. Authorization is managed using: Authorization object Description Rules Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. Roles Collections of rules. You can associate, or bind, users and groups to multiple roles. Bindings Associations between users and/or groups with a role. There are two levels of RBAC roles and bindings that control authorization: RBAC level Description Cluster RBAC Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. Local RBAC Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation. This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles. During evaluation, both the cluster role bindings and the local role bindings are used. For example: Cluster-wide "allow" rules are checked. Locally-bound "allow" rules are checked. Deny by default. 10.2.1.1. Default cluster roles OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally. Important It is not recommended to manually modify the default cluster roles.
Modifications to these system roles can prevent a cluster from functioning properly. Default cluster role Description admin A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. basic-user A user that can get basic information about projects and users. cluster-admin A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. cluster-status A user that can get basic cluster status information. cluster-reader A user that can get or view most of the objects but cannot modify them. edit A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. self-provisioner A user that can create their own projects. view A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin , plus a few additional permissions like the ability to edit rate limits, for that project. This binding can cause confusion in the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin . The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below. Warning The get pods/exec , get pods/* , and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges . 10.2.1.2. Evaluating authorization OpenShift Container Platform evaluates authorization by using: Identity The user name and list of groups that the user belongs to. Action The action you perform. In most cases, this consists of: Project : The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities. Verb : The action itself: get , list , create , update , delete , deletecollection , or watch . Resource name : The API endpoint that you access. Bindings The full list of bindings, the associations between users or groups with a role. OpenShift Container Platform evaluates authorization by using the following steps: The identity and the project-scoped action are used to find all bindings that apply to the user or their groups. Bindings are used to locate all the roles that apply. Roles are used to find all the rules that apply. The action is checked against each rule to find a match. If no matching rule is found, the action is then denied by default. Tip Remember that users and groups can be associated with, or bound to, multiple roles at the same time.
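One way to watch this evaluation happen is the oc auth can-i subcommand, which asks the server whether an identity may perform a given verb on a resource. The following is a brief sketch; the project and user names are placeholders, not values from this document.

$ oc auth can-i create pods -n my-project
$ oc auth can-i list secrets -n my-project --as=alice

The server answers yes or no after walking the same binding, role, and rule chain described above, including the deny-by-default fallback.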
Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with. Important The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin . Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level. 10.2.1.2.1. Cluster role aggregation The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation , where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources. 10.2.2. Projects and namespaces A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces. Namespaces provide a unique scope for: Named resources to avoid basic naming collisions. Delegated management authority to trusted users. The ability to limit community resource consumption. Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users. A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects. Projects can have a separate name , displayName , and description . The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters. The optional displayName is how the project is displayed in the web console (defaults to name ). The optional description can be a more detailed description of the project and is also visible in the web console. Each project scopes its own set of: Object Description Objects Pods, services, replication controllers, etc. Policies Rules for which users can or cannot perform actions on objects. Constraints Quotas for each kind of object that can be limited. Service accounts Service accounts act automatically with designated access to objects in the project. Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects. Developers and administrators can interact with projects by using the CLI or the web console. 10.2.3. Default projects OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by the kubelet. Pods created for master components in these namespaces are already marked as critical. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. 10.2.4.
Viewing cluster roles and bindings You can use the oc CLI to view cluster roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the cluster roles and bindings. Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings. Procedure To view the cluster roles and their associated rule sets: $ oc describe clusterrole.rbac Example output Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete
deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] 
replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] 
appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ... 
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles: USD oc describe clusterrolebinding.rbac Example output Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ... 10.2.5. Viewing local roles and bindings You can use the oc CLI to view local roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the local roles and bindings: Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings. Users with the admin default cluster role bound locally can view and manage roles and bindings in that project. Procedure To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project: USD oc describe rolebinding.rbac To view the local role bindings for a different project, add the -n flag to the command: USD oc describe rolebinding.rbac -n joe-project Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. 
Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project 10.2.6. Adding roles to users You can use the oc adm administrator CLI to manage the roles and bindings. Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands. You can bind any of the default cluster roles to local users or groups in your project. Procedure Add a role to a user in a specific project: USD oc adm policy add-role-to-user <role> <user> -n <project> For example, you can add the admin role to the alice user in the joe project by running: USD oc adm policy add-role-to-user admin alice -n joe Tip You can alternatively apply the following YAML to add the role to the user: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice View the local role bindings and verify the addition in the output: USD oc describe rolebinding.rbac -n <project> For example, to view the local role bindings for the joe project: USD oc describe rolebinding.rbac -n joe Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe 1 The alice user has been added to the admin-0 RoleBinding . 10.2.7. Creating a local role You can create a local role for a project and then bind it to a user. Procedure To create a local role for a project, run the following command: USD oc create role <name> --verb=<verb> --resource=<resource> -n <project> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to <project> , the project name For example, to create a local role that allows a user to view pods in the blue project, run the following command: USD oc create role podview --verb=get --resource=pod -n blue To bind the new role to a user, run the following command: USD oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue 10.2.8.
Creating a cluster role You can create a cluster role. Procedure To create a cluster role, run the following command: USD oc create clusterrole <name> --verb=<verb> --resource=<resource> In this command, specify: <name> , the cluster role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to For example, to create a cluster role that allows a user to view pods, run the following command: USD oc create clusterrole podviewonly --verb=get --resource=pod 10.2.9. Local role binding commands When you manage a user or group's associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used. You can use the following commands for local RBAC management. Table 10.1. Local role binding operations Command Description USD oc adm policy who-can <verb> <resource> Indicates which users can perform an action on a resource. USD oc adm policy add-role-to-user <role> <username> Binds a specified role to specified users in the current project. USD oc adm policy remove-role-from-user <role> <username> Removes a given role from specified users in the current project. USD oc adm policy remove-user <username> Removes specified users and all of their roles in the current project. USD oc adm policy add-role-to-group <role> <groupname> Binds a given role to specified groups in the current project. USD oc adm policy remove-role-from-group <role> <groupname> Removes a given role from specified groups in the current project. USD oc adm policy remove-group <groupname> Removes specified groups and all of their roles in the current project. 10.2.10. Cluster role binding commands You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources. Table 10.2. Cluster role binding operations Command Description USD oc adm policy add-cluster-role-to-user <role> <username> Binds a given role to specified users for all projects in the cluster. USD oc adm policy remove-cluster-role-from-user <role> <username> Removes a given role from specified users for all projects in the cluster. USD oc adm policy add-cluster-role-to-group <role> <groupname> Binds a given role to specified groups for all projects in the cluster. USD oc adm policy remove-cluster-role-from-group <role> <groupname> Removes a given role from specified groups for all projects in the cluster. 10.2.11. Creating a cluster admin The cluster-admin role is required to perform administrator-level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources. Prerequisites You must have created a user to define as the cluster admin. Procedure Define the user as a cluster admin: USD oc adm policy add-cluster-role-to-user cluster-admin <user> 10.3. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 10.3.1. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 10.4. Image configuration Understand and configure image registry settings. 10.4.1. Image controller configuration parameters The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . Its spec offers the following configuration parameters. Note Parameters such as DisableScheduledImport , MaxImagesBulkImportedPerRepository , MaxScheduledImportsPerMinute , ScheduledImageImportMinimumIntervalSeconds , InternalRegistryHostname are not configurable. Parameter Description allowedRegistriesForImport Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. domainName : Specifies a domain name for the registry. If the registry uses a port other than the standard ports 80 or 443, the port should be included in the domain name as well. insecure : Insecure indicates whether the registry is secure or insecure. By default, if not otherwise specified, the registry is assumed to be secure. additionalTrustedCA A reference to a config map containing additional CAs that should be trusted during image stream import , pod image pull , openshift-image-registry pullthrough , and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust. externalRegistryHostnames Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in the publicDockerImageRepository field in image streams. The value must be in hostname[:port] format. registrySources Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. insecureRegistries : Registries which do not have a valid TLS certificate or only support HTTP connections. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com .
You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . blockedRegistries : Registries for which image pull and push actions are denied. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are allowed. allowedRegistries : Registries for which image pull and push actions are allowed. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are blocked. containerRuntimeSearchRegistries : Registries for which image pull and push actions are allowed using image short names. All other registries are blocked. Either blockedRegistries or allowedRegistries can be set, but not both. Warning When the allowedRegistries parameter is defined, all registries, including registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster. Parameter Description internalRegistryHostname Set by the Image Registry Operator, which controls the internalRegistryHostname . It sets the hostname for the default OpenShift image registry. The value must be in hostname[:port] format. For backward compatibility, you can still use the OPENSHIFT_DEFAULT_REGISTRY environment variable, but this setting overrides the environment variable. externalRegistryHostnames Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. 10.4.2. Configuring image registry settings You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). When changes to the registry are applied to the image.config.openshift.io/cluster CR, the Machine Config Operator (MCO) performs the following sequential actions: Cordons the node Applies changes by restarting CRI-O Uncordons the node Note The MCO does not restart nodes when it detects changes. 
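Before editing, you may want to review the current cluster image configuration. A minimal sketch using standard oc syntax:

USD oc get image.config.openshift.io/cluster -o yaml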
Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Image : Holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . 2 allowedRegistriesForImport : Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. 3 additionalTrustedCA : A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. 4 registrySources : Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or registries that use image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed. The registrySources parameter does not contain configuration for the internal cluster registry. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list. When using the allowedRegistries , blockedRegistries , or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . Insecure external registries should be avoided to reduce possible security risks.
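If you prefer a non-interactive change over oc edit , a merge patch is one possible approach. The following is a sketch only, reusing the insecure.com placeholder from the example above:

USD oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"registrySources":{"insecureRegistries":["insecure.com"]}}}'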
To check that the changes are applied, list your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.25.4+77bec7a ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.25.4+77bec7a ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.25.4+77bec7a ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.25.4+77bec7a ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.25.4+77bec7a ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.25.4+77bec7a For more information on the allowed, blocked, and insecure registry parameters, see Configuring image registry settings . 10.4.2.1. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 10.4.2.2. Configuring image registry repository mirroring Setting up container registry repository mirroring enables you to do the following: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. The attributes of repository mirroring in OpenShift Container Platform include: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. 
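Conceptually, each source repository maps to an ordered list of mirrors in the /etc/containers/registries.conf file on each node. The following is a minimal sketch of one such entry with placeholder registry names; the procedure below generates the real file for you:

[[registry]]
prefix = ""
location = "registry.example.com/team/app"
mirror-by-digest-only = true

[[registry.mirror]]
location = "mirror-a.example.net/team/app"

[[registry.mirror]]
location = "mirror-b.example.net/team/app"

With an entry like this, the container runtime tries mirror-a first, then mirror-b, and falls back to the source registry only if both mirrors fail.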
Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment. After OpenShift Container Platform installation: Even if you don't configure mirroring during OpenShift Container Platform installation, you can do so later using the ImageContentSourcePolicy object. The following procedure provides a post-installation mirror configuration, where you create an ImageContentSourcePolicy object that identifies: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. Note You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy object. You cannot add a pull secret to a project. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source directory to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy \ docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi8/ubi-minimal image from registry.access.redhat.com . After you create the registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Log in to your OpenShift Container Platform cluster. Create an ImageContentSourcePolicy file (for example, registryrepomirror.yaml ), replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8 1 Indicates the name of the image registry and repository. 2 Indicates multiple mirror repositories for each target repository. If one mirror is down, the target repository can use another mirror. 3 Indicates the registry and repository containing the content that is mirrored. 4 You can configure a namespace inside a registry to use any image in that namespace. 
If you use a registry domain as a source, the ImageContentSourcePolicy resource is applied to all repositories from the registry. 5 If you configure the registry name, the ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry. 6 Pulls the image mirror.example.net/image@sha256:... . 7 Pulls the image myimage in the source registry namespace from the mirror mirror.example.net/myimage@sha256:... . 8 Pulls the image registry.example.com/example/myimage from the mirror registry mirror.example.net/registry-example-com/example/myimage@sha256:... . The ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry mirror.example.net/registry-example-com . Create the new ImageContentSourcePolicy object: USD oc create -f registryrepomirror.yaml After the ImageContentSourcePolicy object is created, the new settings are deployed to each node and the cluster starts using the mirrored repository for requests to the source repository. To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0 The ImageContentSourcePolicy resource does not restart the nodes. Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi8/ubi-minimal" mirror-by-digest-only = true [[registry.mirror]] location = "example.io/example/ubi-minimal" [[registry.mirror]] location = "example.com/example/ubi-minimal" [[registry]] prefix = "" location = "registry.example.com" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net/registry-example-com" [[registry]] prefix = "" location = "registry.example.com/example" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net" [[registry]] prefix = "" location = "registry.example.com/example/myimage" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net/image" [[registry]] prefix = "" location = "registry.redhat.io" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.com" [[registry]] prefix = "" location = "registry.redhat.io/openshift4" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.com/redhat" Pull an image digest to the node from the source and check if it is resolved by the mirror. ImageContentSourcePolicy objects support image digests only, not image tags.
sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. 10.5. Populating OperatorHub from mirrored Operator catalogs If you mirrored Operator catalogs for use with disconnected clusters, you can populate OperatorHub with the Operators from your mirrored catalogs. You can use the generated manifests from the mirroring process to create the required ImageContentSourcePolicy and CatalogSource objects. 10.5.1. Prerequisites Mirroring Operator catalogs for use with disconnected clusters 10.5.2. Creating the ImageContentSourcePolicy object After mirroring Operator catalog content to your mirror registry, create the required ImageContentSourcePolicy (ICSP) object. The ICSP object configures nodes to translate between the image references stored in Operator manifests and the mirrored registry. Procedure On a host with access to the disconnected cluster, create the ICSP by running the following command to specify the imageContentSourcePolicy.yaml file in your manifests directory: USD oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml where <path/to/manifests/dir> is the path to the manifests directory for your mirrored content. You can now create a CatalogSource object to reference your mirrored index image and Operator content. 10.5.3. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. Prerequisites An index image built and pushed to a registry. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.11 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any slash ( / ) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace.
Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify your index image. If you specify a tag after the image name, for example :v4.11 , the catalog source pod uses an image pull policy of Always , meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id> , the image pull policy is IfNotPresent , meaning the pod pulls the image only if it does not already exist on the node. 4 Specify your name or an organization name publishing the catalog. 5 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources Accessing images for Operators from private registries Image template for custom catalog sources Image pull policy 10.6. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. As a cluster administrator, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces... to make the Operator available to all users and projects. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. 10.6.1. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. 
Procedure Navigate in the web console to the Operators → OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type jaeger to find the Jaeger Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Select one of the following: All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces , then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 
10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation; a short verification sketch follows the additional resources below. Additional resources About OperatorGroups
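Verification sketch referenced above, assuming the default openshift-operators namespace; the resource names in the output vary by Operator:

USD oc get subscription,installplan,csv -n openshift-operators

If you chose the Manual approval strategy, the install plan waits for approval before the CSV appears. One way to approve it from the CLI is to patch the InstallPlan object; <install_plan_name> is a placeholder for the name returned by the previous command:

USD oc patch installplan <install_plan_name> -n openshift-operators --type=merge -p '{"spec":{"approved":true}}'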
[ "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete 
deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create 
delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] 
[] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default 
openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>", "INFO Install complete! 
INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.25.4+77bec7a ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.25.4+77bec7a ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.25.4+77bec7a ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.25.4+77bec7a ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.25.4+77bec7a ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.25.4+77bec7a", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host 
binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6", "oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.11 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: 
\"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/post-installation_configuration/post-install-preparing-for-users
Appendix G. Data Size Terminology Reference Table
Appendix G. Data Size Terminology Reference Table

Table G.1. Data Size Terminology Reference Table

Binary (Bytes)
  Term       Abbreviation   Size (in bytes)
  Kibibyte   KiB            1024
  Mebibyte   MiB            1024^2
  Gibibyte   GiB            1024^3
  Tebibyte   TiB            1024^4
  Pebibyte   PiB            1024^5
  Exbibyte   EiB            1024^6
  Zebibyte   ZiB            1024^7
  Yobibyte   YiB            1024^8

Decimal (Bytes)
  Term       Abbreviation   Size (in bytes)
  Kilobyte   KB             1000
  Megabyte   MB             1000^2
  Gigabyte   GB             1000^3
  Terabyte   TB             1000^4
  Petabyte   PB             1000^5
  Exabyte    EB             1000^6
  Zettabyte  ZB             1000^7
  Yottabyte  YB             1000^8

Decimal (Bits)
  Term       Abbreviation   Size (in bytes)
  Kilobit    Kb             125
  Megabit    Mb             125,000
  Gigabit    Gb             125,000,000
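The values in the table can be verified with ordinary shell arithmetic. The following is a minimal illustrative sketch (bash syntax assumed), not part of the original reference table:

    echo $((1024 ** 3))         # 1 GiB = 1073741824 bytes (binary)
    echo $((1000 ** 3))         # 1 GB  = 1000000000 bytes (decimal)
    echo $(((1000 ** 3) / 8))   # 1 Gb  = 125000000 bytes (a bit is one eighth of a byte)

This also explains why a "1 TB" drive is reported by many tools as roughly 931 GiB: 10^12 bytes divided by 1024^3 is approximately 931.3.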
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/data-size-appendix
Chapter 9. Machine [machine.openshift.io/v1beta1]
Chapter 9. Machine [machine.openshift.io/v1beta1] Description Machine is the Schema for the machines API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineSpec defines the desired state of Machine status object MachineStatus defines the observed state of Machine 9.1.1. .spec Description MachineSpec defines the desired state of Machine Type object Property Type Description lifecycleHooks object LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. metadata object ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. providerID string ProviderID is the identification ID of the machine provided by the provider. This field must match the provider ID as seen on the node object corresponding to this machine. This field is required by higher level consumers of cluster-api. Example use case is cluster autoscaler with cluster-api as provider. Clean-up logic in the autoscaler compares machines to nodes to find out machines at the provider which could not get registered as Kubernetes nodes. With cluster-api as a generic out-of-tree provider for autoscaler, this field is required by the autoscaler to be able to have a provider view of the list of machines. Another list of nodes is queried from the k8s apiserver and then a comparison is done to find out unregistered machines, which are then marked for deletion. This field will be set by the actuators and consumed by higher level entities like the autoscaler that will be interfacing with cluster-api as a generic provider. providerSpec object ProviderSpec details Provider-specific configuration to use during node creation. taints array The list of the taints to be applied to the corresponding Node in an additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (for example, if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. 9.1.2. .spec.lifecycleHooks Description LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. Type object Property Type Description preDrain array PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination.
preDrain[] object LifecycleHook represents a single instance of a lifecycle hook preTerminate array PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks will be actioned after the Machine has been drained. preTerminate[] object LifecycleHook represents a single instance of a lifecycle hook 9.1.3. .spec.lifecycleHooks.preDrain Description PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. Type array 9.1.4. .spec.lifecycleHooks.preDrain[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase, or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 9.1.5. .spec.lifecycleHooks.preTerminate Description PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks will be actioned after the Machine has been drained. Type array 9.1.6. .spec.lifecycleHooks.preTerminate[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase, or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 9.1.7. .spec.metadata Description ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 9.1.8. .spec.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. Type array 9.1.9. .spec.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid string UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids 9.1.10. .spec.providerSpec Description ProviderSpec details Provider-specific configuration to use during node creation. 
Type object Property Type Description value `` Value is an inlined, serialized representation of the resource configuration. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field, akin to component config. 9.1.11. .spec.taints Description The list of the taints to be applied to the corresponding Node in an additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (for example, if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. Type array 9.1.12. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 9.1.13. .status Description MachineStatus defines the observed state of Machine Type object Property Type Description addresses array Addresses is a list of addresses assigned to the machine. Queried from cloud provider, if available. addresses[] object NodeAddress contains information for the node's address. conditions array Conditions defines the current state of the Machine conditions[] object Condition defines an observation of a Machine API resource operational state. errorMessage string ErrorMessage will be set in the event that there is a terminal problem reconciling the Machine and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine's spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller's output. errorReason string ErrorReason will be set in the event that there is a terminal problem reconciling the Machine and will contain a succinct value suitable for machine interpretation. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine's spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller's output.
lastOperation object LastOperation describes the last-operation performed by the machine-controller. This API should be useful as a history in terms of the latest operation performed on the specific machine. It should also convey the state of the latest operation, for example whether it is still on-going, failed, or completed successfully. lastUpdated string LastUpdated identifies when this status was last observed. nodeRef object NodeRef will point to the corresponding Node if it exists. phase string Phase represents the current phase of machine actuation. One of: Failed, Provisioning, Provisioned, Running, Deleting providerStatus `` ProviderStatus details a Provider-specific status. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field. 9.1.14. .status.addresses Description Addresses is a list of addresses assigned to the machine. Queried from cloud provider, if available. Type array 9.1.15. .status.addresses[] Description NodeAddress contains information for the node's address. Type object Required address type Property Type Description address string The node address. type string Node address type, one of Hostname, ExternalIP or InternalIP. 9.1.16. .status.conditions Description Conditions defines the current state of the Machine Type array 9.1.17. .status.conditions[] Description Condition defines an observation of a Machine API resource operational state. Type object Property Type Description lastTransitionTime string Last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string A human readable message indicating details about the transition. This field may be empty. reason string The reason for the condition's last transition in CamelCase. The specific API may choose whether or not this field is considered a guaranteed API. This field may not be empty. severity string Severity provides an explicit classification of Reason code, so the users or machines can immediately understand the current situation and act accordingly. The Severity field MUST be set only when Status=False. status string Status of the condition, one of True, False, Unknown. type string Type of condition in CamelCase or in foo.example.com/CamelCase. Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. 9.1.18. .status.lastOperation Description LastOperation describes the last-operation performed by the machine-controller. This API should be useful as a history in terms of the latest operation performed on the specific machine. It should also convey the state of the latest operation, for example whether it is still on-going, failed, or completed successfully. Type object Property Type Description description string Description is the human-readable description of the last operation. lastUpdated string LastUpdated is the timestamp at which LastOperation API was last updated. state string State is the current status of the last performed operation, e.g., Processing, Failed, Successful, etc. type string Type is the type of operation which was last performed, e.g., Create, Delete, Update, etc. 9.1.19. .status.nodeRef Description NodeRef will point to the corresponding Node if it exists. Type object Property Type Description apiVersion string API version of the referent.
fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 9.2. API endpoints The following API endpoints are available: /apis/machine.openshift.io/v1beta1/machines GET : list objects of kind Machine /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines DELETE : delete collection of Machine GET : list objects of kind Machine POST : create a Machine /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines/{name} DELETE : delete a Machine GET : read the specified Machine PATCH : partially update the specified Machine PUT : replace the specified Machine /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines/{name}/status GET : read status of the specified Machine PATCH : partially update status of the specified Machine PUT : replace status of the specified Machine 9.2.1. /apis/machine.openshift.io/v1beta1/machines Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Machine Table 9.2. HTTP responses HTTP code Response body 200 - OK MachineList schema 401 - Unauthorized Empty 9.2.2. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines Table 9.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed.
HTTP method DELETE Description delete collection of Machine Table 9.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.6. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Machine Table 9.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.8. HTTP responses HTTP code Response body 200 - OK MachineList schema 401 - Unauthorized Empty HTTP method POST Description create a Machine Table 9.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.10. Body parameters Parameter Type Description body Machine schema Table 9.11.
HTTP responses HTTP code Response body 200 - OK Machine schema 201 - Created Machine schema 202 - Accepted Machine schema 401 - Unauthorized Empty 9.2.3. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines/{name} Table 9.12. Global path parameters Parameter Type Description name string name of the Machine namespace string object name and auth scope, such as for teams and projects Table 9.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Machine Table 9.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.15. Body parameters Parameter Type Description body DeleteOptions schema Table 9.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Machine Table 9.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.18. HTTP responses HTTP code Response body 200 - OK Machine schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Machine Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body Patch schema Table 9.21. HTTP responses HTTP code Response body 200 - OK Machine schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Machine Table 9.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.23. Body parameters Parameter Type Description body Machine schema Table 9.24. HTTP responses HTTP code Response body 200 - OK Machine schema 201 - Created Machine schema 401 - Unauthorized Empty 9.2.4. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines/{name}/status Table 9.25. Global path parameters Parameter Type Description name string name of the Machine namespace string object name and auth scope, such as for teams and projects Table 9.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed.
HTTP method GET Description read status of the specified Machine Table 9.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.28. HTTP responses HTTP code Response body 200 - OK Machine schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Machine Table 9.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.30. Body parameters Parameter Type Description body Patch schema Table 9.31. HTTP responses HTTP code Response body 200 - OK Machine schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Machine Table 9.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters.
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.33. Body parameters Parameter Type Description body Machine schema Table 9.34. HTTP responses HTTP code Response body 200 - OK Machine schema 201 - Created Machine schema 401 - Unauthorized Empty
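For illustration, a hedged sketch of calling this status endpoint from the command line with the dryRun and fieldValidation query parameters described above; the API server URL, namespace, machine name, and patch body are placeholders, not values from this reference:

APISERVER=https://api.example.openshift.com:6443   # placeholder cluster endpoint
TOKEN=$(oc whoami -t)

# Strict server-side validation, combined with a dry run so nothing is persisted.
curl -k -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/merge-patch+json" \
  "${APISERVER}/apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machines/example-machine/status?dryRun=All&fieldValidation=Strict" \
  -d '{"status":{"phase":"Running"}}'   # example body; fields must match the Machine schema

With fieldValidation=Strict, a body containing an unknown or duplicate field returns a BadRequest error instead of being silently accepted.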
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_apis/machine-machine-openshift-io-v1beta1
2.12. RHEA-2011:0552 - new package: iwl6000g2a-firmware
2.12. RHEA-2011:0552 - new package: iwl6000g2a-firmware A new iwl6000g2a-firmware package that works with the iwlagn driver in the latest Red Hat Enterprise Linux kernels to enable support for Intel Wireless WiFi Link 6005 Series AGN Adapters is now available. iwlagn is a kernel driver module for the Intel Wireless WiFi Link series of devices. The iwlagn driver requires firmware loaded on the device in order to function. This new iwl6000g2a-firmware package provides the firmware required by iwlagn to enable Intel Wireless WiFi Link 6005 Series AGN Adapters. (BZ# 663971 ) All users of the iwlagn driver, especially those requiring iwl6000g2a support, should install this new package, which provides this enhancement.
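A brief sketch of installing the package on a Red Hat Enterprise Linux 6 host:

# Install the firmware package; the iwlagn driver loads the firmware
# automatically the next time the device is initialized.
yum install iwl6000g2a-firmware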
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/iwl6000g2a-firmware_new
Chapter 9. NUMA
Chapter 9. NUMA Historically, all memory on AMD64 and Intel 64 systems was equally accessible by all CPUs. In this model, known as Uniform Memory Access (UMA), access times are the same no matter which CPU performs the operation. This behavior is no longer the case with recent AMD64 and Intel 64 processors. In Non-Uniform Memory Access (NUMA), system memory is divided across NUMA nodes , which correspond to sockets or to a particular set of CPUs that have identical access latency to the local subset of system memory. This chapter describes memory allocation and NUMA tuning configurations in virtualized environments. 9.1. NUMA Memory Allocation Policies The following policies define how memory is allocated from the nodes in a system: Strict Strict policy means that the allocation will fail if the memory cannot be allocated on the target node. Specifying a NUMA nodeset list without defining a memory mode attribute defaults to strict mode. Interleave Memory pages are allocated across the nodes specified by a nodeset, in a round-robin fashion. Preferred Memory is allocated from a single preferred memory node. If sufficient memory is not available, memory can be allocated from other nodes. To enable the intended policy, set it as the value of the <memory mode> element of the domain XML file: <numatune> <memory mode='preferred' nodeset='0'/> </numatune> Important If memory is overcommitted in strict mode and the guest does not have sufficient swap space, the kernel will kill some guest processes to retrieve additional memory. Red Hat recommends using preferred allocation and specifying a single nodeset (for example, nodeset='0') to prevent this situation.
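As a hedged illustration of applying one of these policies, the following sketch inspects the host topology and then edits a guest definition; the guest name rhel7-guest and the node range 0-1 are placeholders:

# Inspect the host NUMA topology first (requires the numactl package).
numactl --hardware

# Edit the guest and add a <numatune> element inside <domain>, for example
# to interleave guest memory across nodes 0 and 1:
#   <numatune>
#     <memory mode='interleave' nodeset='0-1'/>
#   </numatune>
virsh edit rhel7-guest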
[ "<numatune> <memory mode=' preferred ' nodeset='0'> </numatune>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-Virtualization_Tuning_Optimization_Guide-NUMA
8.4.4. Deactivating KSM
8.4.4. Deactivating KSM Kernel same-page merging (KSM) has a performance overhead which may be too large for certain environments or host systems. KSM may also introduce side channels that could potentially be used to leak information across guests. If this is a concern, KSM can be disabled on a per-guest basis. KSM can be deactivated by stopping the ksmtuned and the ksm services; however, this action does not persist across restarts. To deactivate KSM, run the following in a terminal as root: Stopping the ksmtuned and ksm services deactivates KSM only until the next restart. Persistently deactivate KSM with the systemctl commands: When KSM is disabled, any memory pages that were shared prior to deactivating KSM are still shared. To delete all of the PageKSM pages in the system, use the following command: After this is performed, the khugepaged daemon can rebuild transparent hugepages on the KVM guest physical memory. Using # echo 0 >/sys/kernel/mm/ksm/run stops KSM, but does not unshare all the previously created KSM pages (this has the same effect as the # systemctl stop ksm command).
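The result can be verified by reading the KSM state back from sysfs; a brief sketch using the kernel's KSM interface:

# 0 = KSM stopped, 1 = KSM running, 2 = stop KSM and unshare all merged pages
cat /sys/kernel/mm/ksm/run

# After unsharing, the count of currently shared pages should drop to 0
cat /sys/kernel/mm/ksm/pages_shared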
[ "systemctl stop ksmtuned Stopping ksmtuned: [ OK ] systemctl stop ksm Stopping ksm: [ OK ]", "systemctl disable ksm systemctl disable ksmtuned", "echo 2 >/sys/kernel/mm/ksm/run" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-ksm-deactivating_ksm
Chapter 2. Image [image.openshift.io/v1]
Chapter 2. Image [image.openshift.io/v1] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows image identity, and possibly other claims, to be verified as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy.
Mandatory fields should be parsed by clients doing image verification. The others are parsed from the signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 2.1.1. .dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 2.1.2. .dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 2.1.3. .dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 2.1.4. .dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 2.1.5. .signatures Description Signatures holds all signatures of the image. Type array 2.1.6. .signatures[] Description ImageSignature holds a signature of an image. It allows image identity, and possibly other claims, to be verified as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from the signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation.
imageIdentity string A human readable string representing the image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 2.1.7. .signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 2.1.8. .signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 2.1.9. .signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 2.1.10. .signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key ID of the public key belonging to the subject that is used to verify the image signature. It should contain at least the 64 lowest bits of the public key's fingerprint (e.g. 0x685ebe62bf278440). 2.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/images DELETE : delete collection of Image GET : list or watch objects of kind Image POST : create an Image /apis/image.openshift.io/v1/watch/images GET : watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. /apis/image.openshift.io/v1/images/{name} DELETE : delete an Image GET : read the specified Image PATCH : partially update the specified Image PUT : replace the specified Image /apis/image.openshift.io/v1/watch/images/{name} GET : watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/image.openshift.io/v1/images HTTP method DELETE Description delete collection of Image Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Image Table 2.3. HTTP responses HTTP code Response body 200 - OK ImageList schema 401 - Unauthorized Empty HTTP method POST Description create an Image Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Image schema Table 2.6. HTTP responses HTTP code Response body 200 - OK Image schema 201 - Created Image schema 202 - Accepted Image schema 401 - Unauthorized Empty 2.2.2. /apis/image.openshift.io/v1/watch/images HTTP method GET Description watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/image.openshift.io/v1/images/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the Image HTTP method DELETE Description delete an Image Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Image Table 2.11. HTTP responses HTTP code Response body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Image Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Response body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Image Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body Image schema Table 2.16. HTTP responses HTTP code Response body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty 2.2.4. /apis/image.openshift.io/v1/watch/images/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the Image HTTP method GET Description watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
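As a hedged companion to these endpoints, the same read operations through the oc client; the digest below is a placeholder, since Image names are content digests:

# List Image objects (the images resource is cluster-scoped).
oc get images

# Read one Image by name; substitute a real digest from the list above.
oc get image sha256:0123456789abcdef -o yaml   # placeholder digest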
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/image_apis/image-image-openshift-io-v1
Chapter 6. Senders and receivers
Chapter 6. Senders and receivers The client uses sender and receiver links to represent channels for delivering messages. Senders and receivers are unidirectional, with a source end for the message origin, and a target end for the message destination. Sources and targets often point to queues or topics on a message broker. Sources are also used to represent subscriptions. 6.1. Creating queues and topics on demand Some message servers support on-demand creation of queues and topics. When a sender or receiver is attached, the server uses the sender target address or the receiver source address to create a queue or topic with a name matching the address. The message server typically defaults to creating either a queue (for one-to-one message delivery) or a topic (for one-to-many message delivery). The client can indicate which it prefers by setting the queue or topic capability on the source or target. For more details, see the following examples: queue-send.rb queue-receive.rb topic-send.rb topic-receive.rb 6.2. Creating durable subscriptions A durable subscription is a piece of state on the remote server representing a message receiver. Ordinarily, message receivers are discarded when a client closes. However, because durable subscriptions are persistent, clients can detach from them and then re-attach later. Any messages received while detached are available when the client re-attaches. Durable subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that the subscription can be recovered. Example 6.3. Creating shared subscriptions A shared subscription is a piece of state on the remote server representing one or more message receivers. Because it is shared, multiple clients can consume from the same stream of messages. The client configures a shared subscription by setting the shared capability on the receiver source. Shared subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that multiple client processes can locate the same subscription. If the global capability is set in addition to shared , the receiver name alone is used to identify the subscription. Example
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_ruby_client/senders_and_receivers
5.6. Issues with queue_if_no_path feature
5.6. Issues with queue_if_no_path feature If a multipath device is configured with features "1 queue_if_no_path" , then any process that issues I/O will hang until one or more paths are restored. To avoid this, set the no_path_retry N parameter in the /etc/multipath.conf file (where N is the number of times the system should retry a path). If you need to use the features "1 queue_if_no_path" option and you experience the issue noted here, use the dmsetup command to edit the policy at runtime for a particular LUN (that is, one for which all of its paths are unavailable). For example, if you want to change the policy on the multipath device mpathc from "queue_if_no_path" to "fail_if_no_path" , execute the following command. Note that you must specify the mpath N alias (in this example, mpathc ) rather than the device path.
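An illustrative /etc/multipath.conf fragment for the no_path_retry alternative described above; the retry count of 5 and the alias mpathc are examples only:

defaults {
    # Retry failing paths 5 times, then fail I/O instead of queueing it forever.
    no_path_retry 5
}

# Reload the configuration and confirm the device status afterwards:
multipathd -k"reconfigure"
multipath -ll mpathc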
[ "dmsetup message mpathc 0 \"fail_if_no_path\"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/queueifnopath_issues
Appendix D. Removing the Red Hat Virtualization Manager
Appendix D. Removing the Red Hat Virtualization Manager You can use the engine-cleanup command to remove specific components or all components of the Red Hat Virtualization Manager. Note A backup of the Manager database and a compressed archive of the PKI keys and configuration are always automatically created. These files are saved under /var/lib/ovirt-engine/backups/ ; their file names include the date and the engine- and engine-pki- prefixes, respectively. Procedure Run the following command on the Manager machine: You are prompted whether to remove all Red Hat Virtualization Manager components: Type Yes and press Enter to remove all components: Type No and press Enter to select the components to remove. You can select whether to retain or remove each component individually: You are given another opportunity to change your mind and cancel the removal of the Red Hat Virtualization Manager. If you choose to proceed, the ovirt-engine service is stopped, and your environment's configuration is removed in accordance with the options you selected. Remove the Red Hat Virtualization packages:
[ "engine-cleanup", "Do you want to remove all components? (Yes, No) [Yes]:", "Do you want to remove Engine database content? All data will be lost (Yes, No) [No]: Do you want to remove PKI keys? (Yes, No) [No]: Do you want to remove PKI configuration? (Yes, No) [No]: Do you want to remove Apache SSL configuration? (Yes, No) [No]:", "During execution engine service will be stopped (OK, Cancel) [OK]: ovirt-engine is about to be removed, data will be lost (OK, Cancel) [Cancel]:OK", "yum remove rhvm* vdsm-bootstrap" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/removing_red_hat_virtualization_manager_sm_localdb_deploy
Chapter 1. Distributed tracing release notes
Chapter 1. Distributed tracing release notes 1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use distributed tracing for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With distributed tracing you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis Red Hat OpenShift distributed tracing consists of two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Important Jaeger does not use FIPS validated cryptographic modules. 1.2. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.3. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.4. New features and enhancements This release adds improvements related to the following components and concepts. 1.4.1. New features and enhancements Red Hat OpenShift distributed tracing 2.8 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.1.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.8 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.42 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.74.0 Tempo Operator Tempo 0.1.0 1.4.2. New features and enhancements Red Hat OpenShift distributed tracing 2.7 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.2.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.7 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.39 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.63.1 1.4.3. New features and enhancements Red Hat OpenShift distributed tracing 2.6 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.3.1. 
Component versions supported in Red Hat OpenShift distributed tracing version 2.6 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.38 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.60 1.4.4. New features and enhancements Red Hat OpenShift distributed tracing 2.5 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the Red Hat OpenShift distributed tracing platform Operator. The Operator now automatically enables the OTLP ports: Port 4317 is used for OTLP gRPC protocol. Port 4318 is used for OTLP HTTP protocol. This release also adds support for collecting Kubernetes resource attributes to the Red Hat OpenShift distributed tracing data collection Operator. 1.4.4.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.5 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.36 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.56 1.4.5. New features and enhancements Red Hat OpenShift distributed tracing 2.4 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also adds support for auto-provisioning certificates using the Red Hat Elasticsearch Operator. Self-provisioning, which means using the Red Hat OpenShift distributed tracing platform Operator to call the Red Hat Elasticsearch Operator during installation, is fully supported with this release. Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform to use the certificate is a Technology Preview for this release. Note When upgrading to Red Hat OpenShift distributed tracing 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing will be down and unavailable for that period. 1.4.5.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.4 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.34.1 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.49 1.4.6. New features and enhancements Red Hat OpenShift distributed tracing 2.3.1 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.6.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.3.1 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.30.2 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.44.1-1 1.4.7. New features and enhancements Red Hat OpenShift distributed tracing 2.3.0 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. With this release, the Red Hat OpenShift distributed tracing platform Operator is now installed to the openshift-distributed-tracing namespace by default. Before this update, the default installation had been in the openshift-operators namespace. 1.4.7.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.3.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.30.1 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.44.0 1.4.8.
New features and enhancements Red Hat OpenShift distributed tracing 2.2.0 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.8.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.2.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.30.0 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.42.0 1.4.9. New features and enhancements Red Hat OpenShift distributed tracing 2.1.0 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.9.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.1.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.29.1 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.41.1 1.4.10. New features and enhancements Red Hat OpenShift distributed tracing 2.0.0 This release marks the rebranding of Red Hat OpenShift Jaeger to Red Hat OpenShift distributed tracing. This release consists of the following changes, additions, and improvements: Red Hat OpenShift distributed tracing now consists of the following two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Updates Red Hat OpenShift distributed tracing platform Operator to Jaeger 1.28. Going forward, Red Hat OpenShift distributed tracing will only support the stable Operator channel. Channels for individual releases are no longer supported. Introduces a new Red Hat OpenShift distributed tracing data collection Operator based on OpenTelemetry 0.33. Note that this Operator is a Technology Preview feature. Adds support for OpenTelemetry protocol (OTLP) to the Query service. Introduces a new distributed tracing icon that appears in the OpenShift OperatorHub. Includes rolling updates to the documentation to support the name change and new features. This release also addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.10.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.0.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.28.0 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.33.0 1.5. Red Hat OpenShift distributed tracing Technology Preview Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.5.1. Red Hat OpenShift distributed tracing 2.8.0 Technology Preview This release introduces support for Tempo Operator as a Technology Preview feature for Red Hat OpenShift distributed tracing. The feature uses version 0.1.0 of Tempo Operator and version 2.0.1 of the upstream Tempo components. You can use Tempo Operator to replace Jaeger so that you can use S3-compatible storage instead of Elasticsearch.
Most users who use Tempo Operator instead of Jaeger will not notice any difference in functionality because Tempo supports the same ingestion and query protocols as Jaeger and uses the same user interface. If you enable this Technology Preview feature, note the following limitations of the current implementation: Tempo Operator currently does not support disconnected installations. ( TRACING-3145 ) When you use the Jaeger user interface (UI) with Tempo Operator, the Jaeger UI lists only services that have sent traces within the last 15 minutes. For services that have not sent traces within the last 15 minutes, those traces are still stored even though they are not visible in the Jaeger UI. ( TRACING-3139 ) Expanded support for the Tempo Operator is planned for future releases of Red Hat OpenShift distributed tracing. Possible additional features might include support for TLS authentication, multitenancy, and multiple clusters. For more information about the Tempo Operator, see the documentation for the Community Tempo Operator . 1.5.2. Red Hat OpenShift distributed tracing 2.4.0 Technology Preview This release also adds support for auto-provisioning certificates using the Red Hat Elasticsearch Operator. Self-provisioning, which means using the Red Hat OpenShift distributed tracing platform Operator to call the Red Hat Elasticsearch Operator during installation, is fully supported with this release. Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform to use the certificate is a Technology Preview for this release. 1.5.3. Red Hat OpenShift distributed tracing 2.2.0 Technology Preview Unsupported OpenTelemetry Collector components included in the 2.1 release have been removed. 1.5.4. Red Hat OpenShift distributed tracing 2.1.0 Technology Preview This release introduces a breaking change to how certificates are configured in the OpenTelemetry custom resource file. In the new version, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.5.5. Red Hat OpenShift distributed tracing 2.0.0 Technology Preview This release includes the addition of the Red Hat OpenShift distributed tracing data collection, which you install using the Red Hat OpenShift distributed tracing data collection Operator. Red Hat OpenShift distributed tracing data collection is based on the OpenTelemetry APIs and instrumentation. Red Hat OpenShift distributed tracing data collection includes the OpenTelemetry Operator and Collector. The Collector can be used to receive traces in either the OpenTelemetry or Jaeger protocol and send the trace data to Red Hat OpenShift distributed tracing. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor-agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.6.
Red Hat OpenShift distributed tracing known issues These limitations exist in Red Hat OpenShift distributed tracing: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems. These are the known issues for Red Hat OpenShift distributed tracing: OBSDA-220 In some cases, if you try to pull an image using distributed tracing data collection, the image pull fails and a Failed to pull image error message appears. There is no workaround for this issue. TRACING-2057 The Kafka API has been updated to v1beta2 to support the Strimzi Kafka Operator 0.23.0. However, this API version is not supported by AMQ Streams 1.6.3. If you have the following environment, your Jaeger services will not be upgraded, and you cannot create new Jaeger services or modify existing Jaeger services: Jaeger Operator channel: 1.17.x stable or 1.20.x stable AMQ Streams Operator channel: amq-streams-1.6.x To resolve this issue, switch the subscription channel for your AMQ Streams Operator to either amq-streams-1.7.x or stable . 1.7. Red Hat OpenShift distributed tracing fixed issues OSSM-1910 Because of an issue introduced in version 2.6, TLS connections could not be established with OpenShift Container Platform Service Mesh. This update resolves the issue by changing the service port names to match conventions used by OpenShift Container Platform Service Mesh and Istio. OBSDA-208 Before this update, the default 200m CPU and 256Mi memory resource limits could cause distributed tracing data collection to restart continuously on large clusters. This update resolves the issue by removing these resource limits. OBSDA-222 Before this update, spans could be dropped in the OpenShift Container Platform distributed tracing platform. To help prevent this issue from occurring, this release updates version dependencies. TRACING-2337 Jaeger is logging a repetitive warning message in the Jaeger logs similar to the following: {"level":"warn","ts":1642438880.918793,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\\"\\\\x16\\\\x03\\\\x01\\\\x02\\\\x00\\\\x01\\\\x00\\\\x01\\\\xfc\\\\x03\\\\x03vw\\\\x1a\\\\xc9T\\\\xe7\\\\xdaCj\\\\xb7\\\\x8dK\\\\xa6\\\"\"","system":"grpc","grpc_log":true} This issue was resolved by exposing only the HTTP(S) port of the query service, and not the gRPC port. TRACING-2009 The Jaeger Operator has been updated to include support for the Strimzi Kafka Operator 0.23.0. TRACING-1907 The Jaeger agent sidecar injection was failing due to missing config maps in the application namespace. The config maps were getting automatically deleted due to an incorrect OwnerReference field setting and as a result, the application pods were not moving past the "ContainerCreating" stage. The incorrect settings have been removed. TRACING-1725 Follow-up to TRACING-1631. Additional fix to ensure that Elasticsearch certificates are properly reconciled when there are multiple Jaeger production instances using the same name but within different namespaces. See also BZ-1918920 . TRACING-1631 Multiple Jaeger production instances using the same name but within different namespaces caused an Elasticsearch certificate issue.
When multiple service meshes were installed, all of the Jaeger Elasticsearch instances had the same Elasticsearch secret instead of individual secrets, which prevented the OpenShift Elasticsearch Operator from communicating with all of the Elasticsearch clusters. TRACING-1300 Failed connection between Agent and Collector when using Istio sidecar. An update of the Jaeger Operator enabled TLS communication by default between a Jaeger sidecar agent and the Jaeger Collector. TRACING-1208 Authentication "500 Internal Error" when accessing Jaeger UI. When trying to authenticate to the UI using OAuth, a 500 error is returned because the oauth-proxy sidecar does not trust the custom CA bundle defined at installation time with the additionalTrustBundle . TRACING-1166 It is not currently possible to use the Jaeger streaming strategy within a disconnected environment. When a Kafka cluster is being provisioned, it results in an error: Failed to pull image registry.redhat.io/amq7/amq-streams-kafka-24-rhel7@sha256:f9ceca004f1b7dccb3b82d9a8027961f9fe4104e0ed69752c0bdd8078b4a1076 . TRACING-809 Jaeger Ingester is incompatible with Kafka 2.3. When there are two or more instances of the Jaeger Ingester and enough traffic, it will continuously generate rebalancing messages in the logs. This is due to a regression in Kafka 2.3 that was fixed in Kafka 2.3.1. For more information, see Jaegertracing-1819 . BZ-1918920 / LOG-1619 The Elasticsearch pods do not get restarted automatically after an update. Workaround: Restart the pods manually.
[ "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "{\"level\":\"warn\",\"ts\":1642438880.918793,\"caller\":\"channelz/logging.go:62\",\"msg\":\"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \\\"transport: http2Server.HandleStreams received bogus greeting from client: \\\\\\\"\\\\\\\\x16\\\\\\\\x03\\\\\\\\x01\\\\\\\\x02\\\\\\\\x00\\\\\\\\x01\\\\\\\\x00\\\\\\\\x01\\\\\\\\xfc\\\\\\\\x03\\\\\\\\x03vw\\\\\\\\x1a\\\\\\\\xc9T\\\\\\\\xe7\\\\\\\\xdaCj\\\\\\\\xb7\\\\\\\\x8dK\\\\\\\\xa6\\\\\\\"\\\"\",\"system\":\"grpc\",\"grpc_log\":true}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/distributed_tracing/distr-tracing-release-notes
Deploying OpenShift Data Foundation using Google Cloud
Deploying OpenShift Data Foundation using Google Cloud Red Hat OpenShift Data Foundation 4.17 Instructions on deploying OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Google Cloud. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Google Cloud clusters. Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation in internal mode, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and follow the appropriate deployment process based on your requirement: Deploy OpenShift Data Foundation on Google Cloud Deploy standalone Multicloud Object Gateway component Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. 
Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for use in the next step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Chapter 2. Deploying OpenShift Data Foundation on Google Cloud You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Google Cloud installer-provisioned infrastructure. This enables you to create internal cluster resources and it results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. Ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the following steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1.
Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully select a unique path name for the backend path that follows the naming convention, because you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict users to performing write or delete operations on the secret: Create a token that matches the above policy:
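The vault CLI commands for these steps are not reproduced in this excerpt; a hedged sketch, assuming a backend path and policy both named odf (placeholders you should replace with your own unique names):

# KV secret engine, API version 1:
vault secrets enable -path=odf kv

# KV secret engine, API version 2:
vault secrets enable -path=odf kv-v2

# Policy permitting write and delete operations on the backend path:
echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}' | vault policy write odf -

# Token bound to that policy; record the token for use during deployment.
vault token create -policy=odf -format json

2.3.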
2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Carefully select a unique path name as the backend path that follows the naming convention. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling key rotation when using KMS Security common practices require periodic encryption key rotation. You can enable key rotation when using KMS using this procedure. To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to either Namespace , StorageClass , or PersistentVolumeClaims (in order of precedence). <value> can be either @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The examples below use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Be aware that the default storage class of the Google Cloud platform uses hard disk drive (HDD). To use solid state drive (SSD) based disks for better performance, you need to create a storage class, using pd-ssd as shown in the following ssd-storageclass.yaml example: Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set as standard . However, if you created a storage class to use SSD based disks for better performance, you need to select that storage class.
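Note For reference, the SSD-backed storage class can be created ahead of time from the command line. This sketch matches the ssd-storageclass.yaml sample that accompanies this guide, where the class name faster comes from that sample:

cat <<EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
EOF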
Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides a high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click Next . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (raw storage is three times the usable capacity). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token .
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier generated above to be used for encryption and decryption. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click Next . In the Data Protection page, if you are configuring a Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.5.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list.
Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set the filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server-* (1 pod on any storage node) ocs-client-operator-* (1 pod on any storage node) ocs-client-operator-console-* (1 pod on any storage node) ocs-provider-server-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC becomes corrupted and cannot be recovered, the application data residing on the Multicloud Object Gateway can be lost entirely. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article .
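The verification steps in sections 2.4 and 2.5 have rough command-line equivalents. The following is a sketch only; the exact resource names and output columns can vary by version:

oc get storagecluster -n openshift-storage
oc get pods -n openshift-storage
oc get noobaa -n openshift-storage

The StorageCluster should report a Ready phase, the pods should match the table above, and the noobaa resource should report a Ready phase for the Multicloud Object Gateway.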
2.5.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC becomes corrupted and cannot be recovered, the application data residing on the Multicloud Object Gateway can be lost entirely. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install .
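You can also confirm the operator installation from the command line; a sketch follows (the CSV name and version column vary by release):

oc get csv -n openshift-storage

The ClusterServiceVersion for OpenShift Data Foundation should eventually report a Succeeded phase.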
Verification steps After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify that the Data Foundation dashboard is available. 3.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click Next . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier generated above to be used for encryption and decryption. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next .
In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "oc get namespace default NAME STATUS AGE default Active 5d2h", "oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" 
--overwrite=true persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: faster provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd volumeBindingMode: WaitForFirstConsumer reclaimPolicy: Delete", "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/deploying_openshift_data_foundation_using_google_cloud/index
Chapter 7. Uninstalling OpenShift Data Foundation
Chapter 7. Uninstalling OpenShift Data Foundation 7.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_vmware_vsphere/uninstalling_openshift_data_foundation
Chapter 4. Configuring Static Routes and the Default Gateway
Chapter 4. Configuring Static Routes and the Default Gateway This chapter covers the configuration of static routes and the default gateway. 4.1. Introduction to Understanding Routing and Gateway Routing is a mechanism that allows a system to find the network path to another system. Routing is often handled by devices on the network dedicated to routing (although any device can be configured to perform routing). Therefore, it is often not necessary to configure static routes on Red Hat Enterprise Linux servers or clients. Exceptions include traffic that must pass through an encrypted VPN tunnel or traffic that should take a specific route for reasons of cost or security. A host's routing table will be automatically populated with routes to directly connected networks. The routes are examined when the network interfaces are " up " . In order to reach a remote network or host, the system is given the address of a gateway to which traffic should be sent. When a host's interface is configured by DHCP , an address of a gateway that leads to an upstream network or the Internet is usually assigned. This gateway is usually referred to as the default gateway as it is the gateway to use if no better route is known to the system (and present in the routing table). Network administrators often use the first or last host IP address in the network as the gateway address; for example, 192.168.10.1 or 192.168.10.254 . This is not to be confused with the address that represents the network itself, in this example 192.168.10.0 , or the subnet's broadcast address, in this example 192.168.10.255 . The default gateway is traditionally a network router. The default gateway is for any and all traffic which is not destined for the local network and for which no preferred route is specified in the routing table. Note To expand your expertise, you might also be interested in the Red Hat System Administration I (RH124) training course.
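To illustrate, the routing table can be inspected and a default gateway added from the command line. The gateway address and interface name below are examples consistent with the text, and routes added with the ip utility are not persistent across reboots:

ip route show
ip route add default via 192.168.10.1 dev eth0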
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/ch-configuring_static_routes_and_the_default_gateway
Chapter 38. Customizing How Types are Generated
Chapter 38. Customizing How Types are Generated Abstract The default JAXB mappings address most of the cases encountered when using XML Schema to define the objects for a Java application. For instances where the default mappings are insufficient, JAXB provides an extensive customization mechanism. 38.1. Basics of Customizing Type Mappings Overview The JAXB specification defines a number of XML elements that customize how Java types are mapped to XML Schema constructs. These elements can be specified in-line with XML Schema constructs. If you cannot, or do not want to, modify the XML Schema definitions, you can specify the customizations in an external binding document. Namespace The elements used to customize the JAXB data bindings are defined in the namespace http://java.sun.com/xml/ns/jaxb . You must add a namespace declaration similar to the one shown in Example 38.1, "JAXB Customization Namespace" . This is added to the root element of all XML documents defining JAXB customizations. Example 38.1. JAXB Customization Namespace Version declaration When using the JAXB customizations, you must indicate the JAXB version being used. This is done by adding a jaxb:version attribute to the root element of the external binding declaration. If you are using in-line customization, you must include the jaxb:version attribute in the schema element containing the customizations. The value of the attribute is always 2.0 . Example 38.2, "Specifying the JAXB Customization Version" shows an example of the jaxb:version attribute used in a schema element. Example 38.2. Specifying the JAXB Customization Version Using in-line customization The most direct way to customize how the code generators map XML Schema constructs to Java constructs is to add the customization elements directly to the XML Schema definitions. The JAXB customization elements are placed inside the xsd:appinfo element of the XML schema construct that is being modified. Example 38.3, "Customized XML Schema" shows an example of a schema containing an in-line JAXB customization. Example 38.3. Customized XML Schema Using an external binding declaration When you cannot, or do not want to, make changes to the XML Schema document that defines your type, you can specify the customizations using an external binding declaration. An external binding declaration consists of a number of nested jaxb:bindings elements. Example 38.4, "JAXB External Binding Declaration Syntax" shows the syntax of an external binding declaration. Example 38.4. JAXB External Binding Declaration Syntax The schemaLocation attribute and the wsdlLocation attribute are used to identify the schema document to which the modifications are applied. Use the schemaLocation attribute if you are generating code from a schema document. Use the wsdlLocation attribute if you are generating code from a WSDL document. The node attribute is used to identify the specific XML schema construct that is to be modified. It is an XPath statement that resolves to an XML Schema element. Given the schema document widgetSchema.xsd , shown in Example 38.5, "XML Schema File" , the external binding declaration shown in Example 38.6, "External Binding Declaration" modifies the generation of the complex type size . Example 38.5. XML Schema File Example 38.6. External Binding Declaration To instruct the code generators to use the external binding declaration, use the wsdl2java tool's -b binding-file option, as shown below:
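For instance, assuming the binding file from Example 38.6 is saved as widgetBinding.xml, the invocation looks like the following (this matches the command sample that accompanies this chapter):

wsdl2java -b widgetBinding.xml widget.wsdl

38.2.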
Specifying the Java Class of an XML Schema Primitive Overview By default, XML Schema types are mapped to Java primitive types. While this is the most logical mapping between XML Schema and Java, it does not always meet the requirements of the application developer. You might want to map an XML Schema primitive type to a Java class that can hold extra information, or you might want to map an XML primitive type to a class that allows for simple type substitution. The JAXB javaType customization element allows you to customize the mapping between an XML Schema primitive type and a Java primitive type. It can be used to customize the mappings at both the global level and the individual instance level. You can use the javaType element as part of a simple type definition or as part of a complex type definition. When using the javaType customization element, you must specify methods for converting the XML representation of the primitive type to and from the target Java class. Some mappings have default conversion methods. For instances where there are no default mappings, Apache CXF provides JAXB methods to ease the development of the required methods. Syntax The javaType customization element takes four attributes, as described in Table 38.1, "Attributes for Customizing the Generation of a Java Class for an XML Schema Type" . Table 38.1. Attributes for Customizing the Generation of a Java Class for an XML Schema Type Attribute Required Description name Yes Specifies the name of the Java class to which the XML Schema primitive type is mapped. It must be either a valid Java class name or the name of a Java primitive type. You must ensure that this class exists and is accessible to your application. The code generator does not check for this class. xmlType No Specifies the XML Schema primitive type that is being customized. This attribute is only used when the javaType element is used as a child of the globalBindings element. parseMethod No Specifies the method responsible for parsing the string-based XML representation of the data into an instance of the Java class. For more information see the section called "Specifying the converters" . printMethod No Specifies the method responsible for converting a Java object to the string-based XML representation of the data. For more information see the section called "Specifying the converters" . The javaType customization element can be used in three ways: To modify all instances of an XML Schema primitive type - The javaType element modifies all instances of an XML Schema type in the schema document when it is used as a child of the globalBindings customization element. When it is used in this manner, you must specify a value for the xmlType attribute that identifies the XML Schema primitive type being modified. Example 38.7, "Global Primitive Type Customization" shows an in-line global customization that instructs the code generators to use java.lang.Integer for all instances of xsd:short in the schema. Example 38.7. Global Primitive Type Customization To modify a simple type definition - The javaType element modifies the class generated for all instances of an XML simple type when it is applied to a named simple type definition. When using the javaType element to modify a simple type definition, do not use the xmlType attribute. Example 38.8, "Binding File for Customizing a Simple Type" shows an external binding file that modifies the generation of a simple type named zipCode . Example 38.8.
Binding File for Customizing a Simple Type To modify an element or attribute of a complex type definition - The javaType can be applied to individual parts of a complex type definition by including it as part of a JAXB property customization. The javaType element is placed as a child to the property's baseType element. When using the javaType element to modify a specific part of a complex type definition, do not use the xmlType attribute. Example 38.9, "Binding File for Customizing an Element in a Complex Type" shows a binding file that modifies an element of a complex type. Example 38.9. Binding File for Customizing an Element in a Complex Type For more information on using the baseType element see Section 38.6, "Specifying the Base Type of an Element or an Attribute" . Specifying the converters Apache CXF cannot convert XML Schema primitive types into arbitrary Java classes. When you use the javaType element to customize the mapping of an XML Schema primitive type, the code generator creates an adapter class that is used to marshal and unmarshal the customized XML Schema primitive type. A sample adapter class is shown in Example 38.10, "JAXB Adapter Class" . Example 38.10. JAXB Adapter Class parseMethod and printMethod are replaced by the value of the corresponding parseMethod attribute and printMethod attribute. The values must identify valid Java methods. You can specify the method's name in one of two ways: A fully qualified Java method name in the form of packagename . ClassName . methodName A simple method name in the form of methodName When you only provide a simple method name, the code generator assumes that the method exists in the class specified by the javaType element's name attribute. Important The code generators do not generate parse or print methods. You are responsible for supplying them. For information on developing parse and print methods see the section called "Implementing converters" . If a value for the parseMethod attribute is not provided, the code generator assumes that the Java class specified by the name attribute has a constructor whose first parameter is a Java String object. The generated adapter's unmarshal() method uses the assumed constructor to populate the Java object with the XML data. If a value for the printMethod attribute is not provided, the code generator assumes that the Java class specified by the name attribute has a toString() method. The generated adapter's marshal() method uses the assumed toString() method to convert the Java object to XML data. If the javaType element's name attribute specifies a Java primitive type, or one of the Java primitive's wrapper types, the code generators use the default converters. For more information on default converters see the section called "Default primitive type converters" . What is generated As mentioned in the section called "Specifying the converters" , using the javaType customization element triggers the generation of one adapter class for each customization of an XML Schema primitive type. The adapters are named in sequence using the pattern Adapter N . If you specify two primitive type customizations, the code generators create two adapter classes: Adapter1 and Adapter2 . The code generated for an XML schema construct depends on whether the affected XML Schema construct is a globally defined element or is defined as part of a complex type.
When the XML Schema construct is a globally defined element, the object factory method generated for the type is modified from the default method as follows: The method is decorated with an @XmlJavaTypeAdapter annotation. The annotation instructs the runtime which adapter class to use when processing instances of this element. The adapter class is specified as a class object. The default type is replaced by the class specified by the javaType element's name attribute. Example 38.11, "Customized Object Factory Method for a Global Element" shows the object factory method for an element affected by the customization shown in Example 38.7, "Global Primitive Type Customization" . Example 38.11. Customized Object Factory Method for a Global Element When the XML Schema construct is defined as part of a complex type, the generated Java property is modified as follows: The property is decorated with an @XmlJavaTypeAdapter annotation. The annotation instructs the runtime which adapter class to use when processing instances of this element. The adapter class is specified as a class object. The property's @XmlElement includes a type property. The value of the type property is the class object representing the generated object's default base type. In the case of XML Schema primitive types, the class is String . The property is decorated with an @XmlSchemaType annotation. The annotation identifies the XML Schema primitive type of the construct. The default type is replaced by the class specified by the javaType element's name attribute. Example 38.12, "Customized Complex Type" shows the Java property generated for an element affected by the customization shown in Example 38.7, "Global Primitive Type Customization" . Example 38.12. Customized Complex Type Implementing converters The Apache CXF runtime does not know how to convert XML primitive types to and from the Java class specified by the javaType element, except that it should call the methods specified by the parseMethod attribute and the printMethod attribute. You are responsible for providing implementations of the methods the runtime calls. The implemented methods must be capable of working with the lexical structures of the XML primitive type. To simplify the implementation of the data conversion methods, Apache CXF provides the javax.xml.bind.DatatypeConverter class. This class provides methods for parsing and printing all of the XML Schema primitive types. The parse methods take string representations of the XML data and they return an instance of the default type defined in Table 34.1, "XML Schema Primitive Type to Java Native Type Mapping" . The print methods take an instance of the default type and they return a string representation of the XML data. The Java documentation for the DatatypeConverter class can be found at https://docs.oracle.com/javase/8/docs/api/javax/xml/bind/DatatypeConverter.html . Default primitive type converters When specifying a Java primitive type, or one of the Java primitive type wrapper classes, in the javaType element's name attribute, it is not necessary to specify values for the parseMethod attribute or the printMethod attribute. The Apache CXF runtime substitutes default converters if no values are provided. The default data converters use the JAXB DatatypeConverter class to parse the XML data. The default converters will also provide any type casting necessary to make the conversion work.
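As an illustration, parse and print methods for the zipCode customization from Example 38.8 might look like the following. This is a sketch only: the zipCodeType class, its no-argument constructor, and its setValue() and getValue() accessors are assumptions for the example, not generated code. The package and class names mirror the fully qualified method names used in Example 38.8 ( com.widgetVendor.widgetTypes.support.parseZipCode and com.widgetVendor.widgetTypes.support.printZipCode ):

package com.widgetVendor.widgetTypes;

import javax.xml.bind.DatatypeConverter;

public class support {
    // Parse the lexical XML form (a string) into the custom class
    public static zipCodeType parseZipCode(String value) {
        zipCodeType zip = new zipCodeType();             // assumed no-arg constructor
        zip.setValue(DatatypeConverter.parseInt(value)); // assumed setter
        return zip;
    }

    // Print the custom class back out in its lexical XML form
    public static String printZipCode(zipCodeType value) {
        return DatatypeConverter.printInt(value.getValue()); // assumed getter
    }
}

38.3.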
Generating Java Classes for Simple Types Overview By default, named simple types do not result in generated types unless they are enumerations. Elements defined using a simple type are mapped to properties of a Java primitive type. There are instances when you need to have simple types generated into Java classes, such as when you want to use type substitution. To instruct the code generators to generate classes for all globally defined simple types, set the globalBindings customization element's mapSimpleTypeDef to true . Adding the customization To instruct the code generators to create Java classes for named simple types, add the globalBinding element's mapSimpleTypeDef attribute and set its value to true . Example 38.13, "in-Line Customization to Force Generation of Java Classes for SimpleTypes" shows an in-line customization that forces the code generator to generate Java classes for named simple types. Example 38.13. in-Line Customization to Force Generation of Java Classes for SimpleTypes Example 38.14, "Binding File to Force Generation of Constants" shows an external binding file that customizes the generation of simple types. Example 38.14. Binding File to Force Generation of Constants Important This customization only affects named simple types that are defined in the global scope. Generated classes The class generated for a simple type has one property called value . The value property is of the Java type defined by the mappings in Section 34.1, "Primitive Types" . The generated class has a getter and a setter for the value property. Example 38.16, "Customized Mapping of a Simple Type" shows the Java class generated for the simple type defined in Example 38.15, "Simple Type for Customized Mapping" . Example 38.15. Simple Type for Customized Mapping Example 38.16. Customized Mapping of a Simple Type 38.4. Customizing Enumeration Mapping Overview If you want enumerated types that are based on a schema type other than xsd:string , you must instruct the code generator to map it. You can also control the name of the generated enumeration constants. The customization is done using the jaxb:typesafeEnumClass element along with one or more jaxb:typesafeEnumMember elements. There might also be instances where the default settings for the code generator cannot create valid Java identifiers for all of the members of an enumeration. You can customize how the code generators handle this by using an attribute of the globalBindings customization. Member name customizer If the code generator encounters a naming collision when generating the members of an enumeration, or if it cannot create a valid Java identifier for a member of the enumeration, the code generator, by default, generates a warning and does not generate a Java enum type for the enumeration. You can alter this behavior by adding the globalBinding element's typesafeEnumMemberName attribute. The typesafeEnumMemberName attribute's values are described in Table 38.2, "Values for Customizing Enumeration Member Name Generation" . Table 38.2. Values for Customizing Enumeration Member Name Generation Value Description skipGeneration (default) Specifies that the Java enum type is not generated and generates a warning. generateName Specifies that member names will be generated following the pattern VALUE_ N . N starts off at one, and is incremented for each member of the enumeration. generateError Specifies that the code generator generates an error when it cannot map an enumeration to a Java enum type.
Example 38.17, "Customization to Force Type Safe Member Names" shows an in-line customization that forces the code generator to generate type safe member names. Example 38.17. Customization to Force Type Safe Member Names Class customizer The jaxb:typesafeEnumClass element specifies that an XML Schema enumeration should be mapped to a Java enum type. It has two attributes that are described in Table 38.3, "Attributes for Customizing a Generated Enumeration Class" . When the jaxb:typesafeEnumClass element is specified in-line, it must be placed inside the xsd:annotation element of the simple type it is modifying. Table 38.3. Attributes for Customizing a Generated Enumeration Class Attribute Description name Specifies the name of the generated Java enum type. This value must be a valid Java identifier. map Specifies if the enumeration should be mapped to a Java enum type. The default value is true . Member customizer The jaxb:typesafeEnumMember element specifies the mapping between an XML Schema enumeration facet and a Java enum type constant. You must use one jaxb:typesafeEnumMember element for each enumeration facet in the enumeration being customized. When using in-line customization, this element can be used in one of two ways: It can be placed inside the xsd:annotation element of the enumeration facet it is modifying. They can all be placed as children of the jaxb:typesafeEnumClass element used to customize the enumeration. The jaxb:typesafeEnumMember element has a name attribute that is required. The name attribute specifies the name of the generated Java enum type constant. It's value must be a valid Java identifier. The jaxb:typesafeEnumMember element also has a value attribute. The value is used to associate the enumeration facet with the proper jaxb:typesafeEnumMember element. The value of the value attribute must match one of the values of an enumeration facets' value attribute. This attribute is required when you use an external binding specification for customizing the type generation, or when you group the jaxb:typesafeEnumMember elements as children of the jaxb:typesafeEnumClass element. Examples Example 38.18, "In-line Customization of an Enumerated Type" shows an enumerated type that uses in-line customization and has the enumeration's members customized separately. Example 38.18. In-line Customization of an Enumerated Type Example 38.19, "In-line Customization of an Enumerated Type Using a Combined Mapping" shows an enumerated type that uses in-line customization and combines the member's customization in the class customization. Example 38.19. In-line Customization of an Enumerated Type Using a Combined Mapping Example 38.20, "Binding File for Customizing an Enumeration" shows an external binding file that customizes an enumerated type. Example 38.20. Binding File for Customizing an Enumeration 38.5. Customizing Fixed Value Attribute Mapping Overview By default, the code generators map attributes defined as having a fixed value to normal properties. When using schema validation, Apache CXF can enforce the schema definition (see Section 24.3.4.7, "Schema Validation Type Values" ). However, using schema validation increases message processing time. Another way to map attributes that have fixed values to Java is to map them to Java constants. You can instruct the code generator to map fixed value attributes to Java constants using the globalBindings customization element. 
You can also customize the mapping of fixed value attributes to Java constants at a more localized level using the property element. Global customization You can alter this behavior by adding the globalBinding element's fixedAttributeAsConstantProperty attribute. Setting this attribute to true instructs the code generator to map any attribute defined using the fixed attribute to a Java constant. Example 38.21, "in-Line Customization to Force Generation of Constants" shows an in-line customization that forces the code generator to generate constants for attributes with fixed values. Example 38.21. in-Line Customization to Force Generation of Constants Example 38.22, "Binding File to Force Generation of Constants" shows an external binding file that customizes the generation of fixed attributes. Example 38.22. Binding File to Force Generation of Constants Local mapping You can customize attribute mapping on a per-attribute basis using the property element's fixedAttributeAsConstantProperty attribute. Setting this attribute to true instructs the code generator to map an attribute defined using the fixed attribute to a Java constant. Example 38.23, "In-Line Customization to Force Generation of Constants" shows an in-line customization that forces the code generator to generate constants for a single attribute with a fixed value. Example 38.23. In-Line Customization to Force Generation of Constants Example 38.24, "Binding File to Force Generation of Constants" shows an external binding file that customizes the generation of a fixed attribute. Example 38.24. Binding File to Force Generation of Constants Java mapping In the default mapping, all attributes are mapped to standard Java properties with getter and setter methods. When this customization is applied to an attribute defined using the fixed attribute, the attribute is mapped to a Java constant, as shown in Example 38.25, "Mapping of a Fixed Value Attribute to a Java Constant" . Example 38.25. Mapping of a Fixed Value Attribute to a Java Constant type is determined by mapping the base type of the attribute to a Java type using the mappings described in Section 34.1, "Primitive Types" . NAME is determined by converting the value of the attribute element's name attribute to all capital letters. value is determined by the value of the attribute element's fixed attribute. For example, the attribute defined in Example 38.23, "In-Line Customization to Force Generation of Constants" is mapped as shown in Example 38.26, "Fixed Value Attribute Mapped to a Java Constant" . Example 38.26. Fixed Value Attribute Mapped to a Java Constant 38.6. Specifying the Base Type of an Element or an Attribute Overview Occasionally you need to customize the class of the object generated for an element, or for an attribute defined as part of an XML Schema complex type. For example, you might want to use a more generalized class of object to allow for simple type substitution. One way to do this is to use the JAXB base type customization. It allows a developer, on a case by case basis, to specify the class of object generated to represent an element or an attribute. The base type customization allows you to specify an alternate mapping between the XML Schema construct and the generated Java object. This alternate mapping can be a simple specialization or a generalization of the default base class. It can also be a mapping of an XML Schema primitive type to a Java class.
Customization usage To apply the JAXB base type property to an XML Schema construct, use the JAXB baseType customization element. The baseType customization element is a child of the JAXB property element, so it must be properly nested. Depending on how you want to customize the mapping of the XML Schema construct to Java object, you add either the baseType customization element's name attribute, or a javaType child element. The name attribute is used to map the default class of the generated object to another class within the same class hierarchy. The javaType element is used when you want to map XML Schema primitive types to a Java class. Important You cannot use both the name attribute and a javaType child element in the same baseType customization element. Specializing or generalizing the default mapping The baseType customization element's name attribute is used to redefine the class of the generated object to a class within the same Java class hierarchy. The attribute specifies the fully qualified name of the Java class to which the XML Schema construct is mapped. The specified Java class must be either a super-class or a sub-class of the Java class that the code generator normally generates for the XML Schema construct. For XML Schema primitive types that map to Java primitive types, the wrapper class is used as the default base class for the purpose of customization. For example, an element defined as being of xsd:int uses java.lang.Integer as its default base class. The value of the name attribute can specify any super-class of Integer such as Number or Object . For simple type substitution, the most common customization is to map the primitive types to an Object object. Example 38.27, "In-Line Customization of a Base Type" shows an in-line customization that maps one element in a complex type to a Java Object object. Example 38.27. In-Line Customization of a Base Type Example 38.28, "External Binding File to Customize a Base Type" shows an external binding file for the customization shown in Example 38.27, "In-Line Customization of a Base Type" . Example 38.28. External Binding File to Customize a Base Type The resulting Java object's @XmlElement annotation includes a type property. The value of the type property is the class object representing the generated object's default base type. In the case of XML Schema primitive types, the class is the wrapper class of the corresponding Java primitive type. Example 38.29, "Java Class with a Modified Base Class" shows the class generated based on the customization in Example 38.28, "External Binding File to Customize a Base Type" . Example 38.29. Java Class with a Modified Base Class Usage with javaType The javaType element can be used to customize how elements and attributes defined using XML Schema primitive types are mapped to Java objects. Using the javaType element provides a lot more flexibility than simply using the baseType element's name attribute. The javaType element allows you to map a primitive type to any class of object. For a detailed description of using the javaType element, see Section 38.2, "Specifying the Java Class of an XML Schema Primitive" .
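To make the pattern in this section concrete, a property customized with a baseType name of java.lang.Object keeps a type hint pointing at the default wrapper class. The following is a sketch only; the element name cost, the Integer default type, and the accessor names are assumptions for illustration rather than output copied from the code generator:

// illustrative shape of a generated property after a baseType customization
@XmlElement(required = true, type = Integer.class)
protected Object cost;

public Object getCost() {
    return cost;
}

public void setCost(Object value) {
    this.cost = value;
}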
[ "xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\"",
"<schema jaxb:version=\"2.0\">",
"<schema targetNamespace=\"http://widget.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" jaxb:version=\"2.0\"> <complexType name=\"size\"> <annotation> <appinfo> <jaxb:class name=\"widgetSize\" /> </appinfo> </annotation> <sequence> <element name=\"longSize\" type=\"xsd:string\" /> <element name=\"numberSize\" type=\"xsd:int\" /> </sequence> </complexType> </schema>",
"<jaxb:bindings xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" jaxb:version=\"2.0\"> <jaxb:bindings [schemaLocation=\" schemaUri \" | wsdlLocation=\" wsdlUri \"]> <jaxb:bindings node=\" nodeXPath \"> binding declaration </jaxb:bindings> </jaxb:bindings> </jaxb:bindings>",
"<schema targetNamespace=\"http://widget.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" version=\"1.0\"> <complexType name=\"size\"> <sequence> <element name=\"longSize\" type=\"xsd:string\" /> <element name=\"numberSize\" type=\"xsd:int\" /> </sequence> </complexType> </schema>",
"<jaxb:bindings xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" jaxb:version=\"2.0\"> <jaxb:bindings schemaLocation=\"wsdlSchema.xsd\"> <jaxb:bindings node=\"xsd:complexType[@name='size']\"> <jaxb:class name=\"widgetSize\" /> </jaxb:bindings> </jaxb:bindings> </jaxb:bindings>",
"wsdl2java -b widgetBinding.xml widget.wsdl",
"<schema targetNamespace=\"http://widget.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" jaxb:version=\"2.0\"> <annotation> <appinfo> <jaxb:globalBindings ...> <jaxb:javaType name=\"java.lang.Integer\" xmlType=\"xsd:short\" /> </jaxb:globalBindings> </appinfo> </annotation> </schema>",
"<jaxb:bindings xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" jaxb:version=\"2.0\"> <jaxb:bindings wsdlLocation=\"widgets.wsdl\"> <jaxb:bindings node=\"xsd:simpleType[@name='zipCode']\"> <jaxb:javaType name=\"com.widgetVendor.widgetTypes.zipCodeType\" parseMethod=\"com.widgetVendor.widgetTypes.support.parseZipCode\" printMethod=\"com.widgetVendor.widgetTypes.support.printZipCode\" /> </jaxb:bindings> </jaxb:bindings> </jaxb:bindings>",
"<jaxb:bindings xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" jaxb:version=\"2.0\"> <jaxb:bindings schemaLocation=\"enumMap.xsd\"> <jaxb:bindings node=\"xsd:complexType[@name='widgetOrderInfo']\"> <jaxb:bindings node=\"xsd:element[@name='cost']\"> <jaxb:property> <jaxb:baseType> <jaxb:javaType name=\"com.widgetVendor.widgetTypes.costType\" parseMethod=\"parseCost\" printMethod=\"printCost\" /> </jaxb:baseType> </jaxb:property> </jaxb:bindings> </jaxb:bindings> </jaxb:bindings> </jaxb:bindings>",
"public class Adapter1 extends XmlAdapter<String, javaType > { public javaType unmarshal(String value) { return( parseMethod(value) ); } public String marshal( javaType value) { return( printMethod(value) ); } }",
"@XmlElementDecl(namespace = \"http://widgetVendor.com/types/widgetTypes\", name = \"shorty\") @XmlJavaTypeAdapter(org.w3._2001.xmlschema.Adapter1.class) public JAXBElement<Integer> createShorty(Integer value) { return new JAXBElement<Integer>(_Shorty_QNAME, Integer.class, null, value); }",
"public class NumInventory { @XmlElement(required = true, type = String.class) @XmlJavaTypeAdapter(Adapter1.class) @XmlSchemaType(name = \"short\") protected Integer numLeft; @XmlElement(required = true) protected String size; public Integer getNumLeft() { return numLeft; } public void setNumLeft(Integer value) { this.numLeft = value; } public String getSize() { return size; } public void setSize(String value) { this.size = value; } }",
"<schema targetNamespace=\"http://widget.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" jaxb:version=\"2.0\"> <annotation> <appinfo> <jaxb:globalBindings mapSimpleTypeDef=\"true\" /> </appinfo> </annotation> </schema>",
"<jaxb:bindings xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" jaxb:version=\"2.0\"> <jaxb:bindings schemaLocation=\"types.xsd\"> <jaxb:globalBindings mapSimpleTypeDef=\"true\" /> </jaxb:bindings> </jaxb:bindings>",
"<simpleType name=\"simpleton\"> <restriction base=\"xsd:string\"> <maxLength value=\"10\"/> </restriction> </simpleType>",
"@XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = \"simpleton\", propOrder = {\"value\"}) public class Simpleton { @XmlValue protected String value; public String getValue() { return value; } public void setValue(String value) { this.value = value; } }",
"<schema targetNamespace=\"http://widget.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" jaxb:version=\"2.0\"> <annotation> <appinfo> <jaxb:globalBindings typesafeEnumMemberName=\"generateName\" /> </appinfo> </annotation> </schema>",
"<schema targetNamespace=\"http://widget.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" jaxb:version=\"2.0\"> <simpleType name=\"widgetInteger\"> <annotation> <appinfo> <jaxb:typesafeEnumClass /> </appinfo> </annotation> <restriction base=\"xsd:int\"> <enumeration value=\"1\"> <annotation> <appinfo> <jaxb:typesafeEnumMember name=\"one\" /> </appinfo> </annotation> </enumeration> <enumeration value=\"2\"> <annotation> <appinfo> <jaxb:typesafeEnumMember name=\"two\" /> </appinfo> </annotation> </enumeration> <enumeration value=\"3\"> <annotation> <appinfo> <jaxb:typesafeEnumMember name=\"three\" /> </appinfo> </annotation> </enumeration> <enumeration value=\"4\"> <annotation> <appinfo> <jaxb:typesafeEnumMember name=\"four\" /> </appinfo> </annotation> </enumeration> </restriction> </simpleType> </schema>",
"<schema targetNamespace=\"http://widget.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" jaxb:version=\"2.0\"> <simpleType name=\"widgetInteger\"> <annotation> <appinfo> <jaxb:typesafeEnumClass> <jaxb:typesafeEnumMember value=\"1\" name=\"one\" /> <jaxb:typesafeEnumMember value=\"2\" name=\"two\" /> <jaxb:typesafeEnumMember value=\"3\" name=\"three\" /> <jaxb:typesafeEnumMember value=\"4\" name=\"four\" /> </jaxb:typesafeEnumClass> </appinfo> </annotation> <restriction base=\"xsd:int\"> <enumeration value=\"1\" /> <enumeration value=\"2\" /> <enumeration value=\"3\" /> <enumeration value=\"4\" /> </restriction> </simpleType> </schema>",
"<jaxb:bindings xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" jaxb:version=\"2.0\"> <jaxb:bindings schemaLocation=\"enumMap.xsd\"> <jaxb:bindings node=\"xsd:simpleType[@name='widgetInteger']\"> <jaxb:typesafeEnumClass> <jaxb:typesafeEnumMember value=\"1\" name=\"one\" /> <jaxb:typesafeEnumMember value=\"2\" name=\"two\" /> <jaxb:typesafeEnumMember value=\"3\" name=\"three\" /> <jaxb:typesafeEnumMember value=\"4\" name=\"four\" /> </jaxb:typesafeEnumClass> </jaxb:bindings> </jaxb:bindings> </jaxb:bindings>",
"<schema targetNamespace=\"http://widget.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" jaxb:version=\"2.0\"> <annotation> <appinfo> <jaxb:globalBindings fixedAttributeAsConstantProperty=\"true\" /> </appinfo> </annotation> </schema>",
"<jaxb:bindings xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" jaxb:version=\"2.0\"> <jaxb:bindings schemaLocation=\"types.xsd\"> <jaxb:globalBindings fixedAttributeAsConstantProperty=\"true\" /> </jaxb:bindings> </jaxb:bindings>",
"<schema targetNamespace=\"http://widget.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" jaxb:version=\"2.0\"> <complexType name=\"widgetAttr\"> <sequence> </sequence> <attribute name=\"fixer\" type=\"xsd:int\" fixed=\"7\"> <annotation> <appinfo> <jaxb:property fixedAttributeAsConstantProperty=\"true\" /> </appinfo> </annotation> </attribute> </complexType> </schema>",
"<jaxb:bindings xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" jaxb:version=\"2.0\"> <jaxb:bindings schemaLocation=\"types.xsd\"> <jaxb:bindings node=\"xsd:complexType[@name='widgetAttr']\"> <jaxb:bindings node=\"xsd:attribute[@name='fixer']\"> <jaxb:property fixedAttributeAsConstantProperty=\"true\" /> </jaxb:bindings> </jaxb:bindings> </jaxb:bindings> </jaxb:bindings>",
"@XmlAttribute public final static type NAME = value ;",
"@XmlRootElement(name = \"widgetAttr\") public class WidgetAttr { @XmlAttribute public final static int FIXER = 7; }",
"<complexType name=\"widgetOrderInfo\"> <all> <element name=\"amount\" type=\"xsd:int\" /> <element name=\"shippingAddress\" type=\"Address\"> <annotation> <appinfo> <jaxb:property> <jaxb:baseType name=\"java.lang.Object\" /> </jaxb:property> </appinfo> </annotation> </element> <element name=\"type\" type=\"xsd:string\"/> </all> </complexType>",
"<jaxb:bindings xmlns:jaxb=\"http://java.sun.com/xml/ns/jaxb\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" jaxb:version=\"2.0\"> <jaxb:bindings schemaLocation=\"enumMap.xsd\"> <jaxb:bindings node=\"xsd:complexType[@name='widgetOrderInfo']\"> <jaxb:bindings node=\"xsd:element[@name='shippingAddress']\"> <jaxb:property> <jaxb:baseType name=\"java.lang.Object\" /> </jaxb:property> </jaxb:bindings> </jaxb:bindings> </jaxb:bindings> </jaxb:bindings>",
"public class WidgetOrderInfo { protected int amount; @XmlElement(required = true) protected String type; @XmlElement(required = true, type = Address.class) protected Object shippingAddress; public Object getShippingAddress() { return shippingAddress; } public void setShippingAddress(Object value) { this.shippingAddress = value; } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXWSCustomTypeMapping
Chapter 9. Getting started with XFS
Chapter 9. Getting started with XFS This is an overview of how to create and maintain XFS file systems.
9.1. The XFS file system
XFS is a highly scalable, high-performance, robust, and mature 64-bit journaling file system that supports very large files and file systems on a single host. It is the default file system in Red Hat Enterprise Linux 8. XFS was originally developed in the early 1990s by SGI and has a long history of running on extremely large servers and storage arrays.
The features of XFS include:
Reliability
- Metadata journaling, which ensures file system integrity after a system crash by keeping a record of file system operations that can be replayed when the system is restarted and the file system remounted
- Extensive run-time metadata consistency checking
- Scalable and fast repair utilities
- Quota journaling. This avoids the need for lengthy quota consistency checks after a crash.
Scalability and performance
- Supported file system size up to 1024 TiB
- Ability to support a large number of concurrent operations
- B-tree indexing for scalability of free space management
- Sophisticated metadata read-ahead algorithms
- Optimizations for streaming video workloads
Allocation schemes
- Extent-based allocation
- Stripe-aware allocation policies
- Delayed allocation
- Space pre-allocation
- Dynamically allocated inodes
Other features
- Reflink-based file copies
- Tightly integrated backup and restore utilities
- Online defragmentation
- Online file system growing
- Comprehensive diagnostics capabilities
- Extended attributes ( xattr ). This allows the system to associate several additional name/value pairs per file.
- Project or directory quotas. This allows quota restrictions over a directory tree.
- Subsecond timestamps
Performance characteristics
XFS has a high performance on large systems with enterprise workloads. A large system is one with a relatively high number of CPUs, multiple HBAs, and connections to external disk arrays. XFS also performs well on smaller systems that have a multi-threaded, parallel I/O workload. XFS has a relatively low performance for single-threaded, metadata-intensive workloads: for example, a workload that creates or deletes large numbers of small files in a single thread.
9.2. Comparison of tools used with ext4 and XFS
This section compares which tools to use to accomplish common tasks on the ext4 and XFS file systems.
Task | ext4 | XFS
Create a file system | mkfs.ext4 | mkfs.xfs
File system check | e2fsck | xfs_repair
Resize a file system | resize2fs | xfs_growfs
Save an image of a file system | e2image | xfs_metadump and xfs_mdrestore
Label or tune a file system | tune2fs | xfs_admin
Back up a file system | dump and restore | xfsdump and xfsrestore
Quota management | quota | xfs_quota
File mapping | filefrag | xfs_bmap
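The table above maps directly onto day-to-day commands. A minimal sketch of the most common XFS tasks, assuming a spare block device /dev/sdb1 and a mount point /mnt/data (both hypothetical):

# Create an XFS file system on the device
mkfs.xfs /dev/sdb1
# Mount it and confirm the file system type
mount /dev/sdb1 /mnt/data
df -Th /mnt/data
# After enlarging the underlying device, grow the file system online;
# xfs_growfs operates on a mounted file system
xfs_growfs /mnt/data
# Consistency checking requires the file system to be unmounted
umount /mnt/data
xfs_repair /dev/sdb1

Note that, unlike resize2fs on ext4, xfs_growfs can only grow a file system; XFS file systems cannot be shrunk.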
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/getting-started-with-xfs_managing-file-systems
Chapter 150. KafkaMirrorMaker2Spec schema reference
Chapter 150. KafkaMirrorMaker2Spec schema reference Used in: KafkaMirrorMaker2
The following properties are defined on KafkaMirrorMaker2Spec (property name, property type, description):
- version (string): The Kafka Connect version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version.
- replicas (integer): The number of pods in the Kafka Connect group. Defaults to 3 .
- image (string): The container image used for Kafka Connect pods. If no image name is explicitly specified, it is determined based on the spec.version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration.
- connectCluster (string): The cluster alias used for Kafka Connect. The value must match the alias of the target Kafka cluster as specified in the spec.clusters configuration. The target Kafka cluster is used by the underlying Kafka Connect framework for its internal topics.
- clusters (KafkaMirrorMaker2ClusterSpec array): Kafka clusters for mirroring.
- mirrors (KafkaMirrorMaker2MirrorSpec array): Configuration of the MirrorMaker 2 connectors.
- resources (ResourceRequirements): The maximum limits for CPU and memory resources and the requested initial resources.
- livenessProbe (Probe): Pod liveness checking.
- readinessProbe (Probe): Pod readiness checking.
- jvmOptions (JvmOptions): JVM Options for pods.
- jmxOptions (KafkaJmxOptions): JMX Options.
- logging (InlineLogging, ExternalLogging): Logging configuration for Kafka Connect.
- clientRackInitImage (string): The image of the init container used for initializing the client.rack .
- rack (Rack): Configuration of the node label which will be used as the client.rack consumer configuration.
- metricsConfig (JmxPrometheusExporterMetrics): Metrics configuration.
- tracing (JaegerTracing, OpenTelemetryTracing): The configuration of tracing in Kafka Connect.
- template (KafkaConnectTemplate): Template for Kafka Connect and Kafka MirrorMaker 2 resources. The template allows users to specify how the Pods , Service , and other services are generated.
- externalConfiguration (ExternalConfiguration): The externalConfiguration property has been deprecated. The external configuration is deprecated and will be removed in the future. Please use the template section instead to configure additional environment variables or volumes. Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors.
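A minimal sketch of how these properties fit together in a KafkaMirrorMaker2 custom resource; the resource name, cluster aliases, bootstrap addresses, and topic pattern below are hypothetical:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  replicas: 3
  connectCluster: "target"   # must match an alias defined in spec.clusters
  clusters:
    - alias: "source"
      bootstrapServers: source-cluster-kafka-bootstrap:9092
    - alias: "target"
      bootstrapServers: target-cluster-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      sourceConnector: {}
      topicsPattern: ".*"    # mirror every topic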
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMaker2Spec-reference
Chapter 1. Red Hat Process Automation Manager project packaging
Chapter 1. Red Hat Process Automation Manager project packaging Red Hat Process Automation Manager projects contain the business assets that you develop in Red Hat Process Automation Manager. Each project in Red Hat Process Automation Manager is packaged as a Knowledge JAR (KJAR) file with configuration files such as a Maven project object model file ( pom.xml ), which contains build, environment, and other information about the project, and a KIE module descriptor file ( kmodule.xml ), which contains the KIE base and KIE session configurations for the assets in the project. You deploy the packaged KJAR file to a KIE Server that runs the decision services, process applications, and other deployable assets (collectively referred to as services ) from that KJAR file. These services are consumed at run time through an instantiated KIE container, or deployment unit . Project KJAR files are stored in a Maven repository and identified by three values: GroupId , ArtifactId , and Version (GAV). The Version value must be unique for every new version that might need to be deployed. To identify an artifact (including a KJAR file), you need all three GAV values. Projects in Business Central are packaged automatically when you build and deploy the projects. For projects outside of Business Central, such as independent Maven projects or projects within a Java application, you must configure the KIE module descriptor settings in an appended kmodule.xml file or directly in your Java application in order to build and deploy the projects.
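Because the KIE module descriptor determines which KIE bases and KIE sessions a KJAR exposes, it helps to see its shape. A minimal sketch of a kmodule.xml file; the KIE base, package, and session names are hypothetical:

<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <!-- A KIE base groups the business assets from the listed packages -->
  <kbase name="myKBase" packages="com.example.rules">
    <!-- A KIE session is the runtime entry point for this KIE base -->
    <ksession name="myKSession"/>
  </kbase>
</kmodule>

The GroupId, ArtifactId, and Version that identify the resulting KJAR are the standard Maven coordinates declared in the project's pom.xml file.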
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/project-packaging-con_packaging-deploying
5.370. yum
5.370. yum 5.370.1. RHBA-2012:0857 - yum bug fix and enhancement update Updated yum packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. Yum is a command-line utility that allows the user to check for, and automatically download and install updated RPM packages. Bug Fixes BZ# 742363 The anacron scheduler starts the yum-cron utility with the default niceness value of 10. Consequently, Yum RPM transactions ran with a very low priority. Also, any updated service inherited this niceness value. This update adds the "reset_nice" configuration option, which allows Yum to reset the niceness value to 0 before running an RPM transaction. With this option set, Yum RPM transactions run and updated services are restarted with niceness value 0 as expected. BZ# 735234 When dependency resolving fails, yum performs an RPMDB check to detect and report existing RPMDB problems. Previously, yum terminated unexpectedly if a PackageSackError exception was raised. The application now returns the message "Yum checks failed" when a PackageSackError is raised and the remaining RPMDB checks are skipped. BZ# 809392 The yum history rollback command could return a traceback if a history checksum was used for the rollback. This happened due to incorrect handling of keyword arguments in the _conv_pkg_state() function. The history checksum argument is now handled correctly. BZ# 711358 When yum was started in a directory that no longer existed, it terminated with a traceback. The yum utility now checks if the current working directory exists; if this is not the case, it changes to the root directory, and continues its execution as expected. BZ# 804120 If the "yum upgrade" command was run with the --sec-severity option, the command execution could enter an infinite loop. The code has been fixed and the option works as expected. BZ# 770117 If user names and passwords for the yum proxy server contained any of the characters "@", ":", or "%", they were not properly quoted in the proxy server URL and the values were misinterpreted by the HTTP client. As a result, yum failed to connect to the proxy server. This update adds proper quoting, and user names and passwords containing these characters are now resolved correctly. BZ# 809373 The Yum transactions in yum history were ordered according to their transaction time. However, this could be misleading. The transactions are now ordered according to their IDs. BZ# 769864 The "yum makecache" command could fail if one of the repositories had the "skip_if_unavailable=1" setting and was unavailable. Such repositories are now skipped as expected. BZ# 798215 The Yum utility could terminate unexpectedly with a traceback similar to the following: This happened because Yum failed to handle localized error messages with UTF-8 characters generated during package downloads. UTF-8 characters in error messages are now handled correctly and localized error messages are displayed as expected. BZ# 735333 On failure, the "yum clean" command returned an incorrect error code and output containing messages that implied that yum performed the clean action successfully. The yum utility now returns only the error message and the correct error code. BZ# 817491 If the "yum provides" command was invoked with an empty-string argument, yum terminated with a traceback. The command now returns an error message and command usage information.
Enhancements BZ# 737826 Yum now prints "Verifying" messages after finishing updates, which inform the user that the respective packages were installed correctly. BZ# 690904 When run as a non-root user, yum cannot read local SSL certificate files and the download process can fail. The yum utility now checks if it can access repository certificate files. If the check fails, it returns a more accurate message that contains the name of the file that failed the check and states that the repository was skipped. Users of yum should upgrade to these updated packages, which fix these bugs and add these enhancements.
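The reset_nice option introduced by BZ#742363 is enabled in the yum configuration file. A minimal sketch, assuming the default configuration path /etc/yum.conf:

[main]
# Reset the niceness value to 0 before running an RPM transaction, so
# updates started by yum-cron do not inherit anacron's low priority
reset_nice=1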
[ "__init__.py:2000:downloadPkgs:UnicodeEncodeError: 'ascii' codec can't encode character" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/yum
Chapter 3. Kafka schema reference
Chapter 3. Kafka schema reference
The following properties are defined on the Kafka resource (property name, property type, description):
- spec (KafkaSpec): The specification of the Kafka and ZooKeeper clusters, and Topic Operator.
- status (KafkaStatus): The status of the Kafka and ZooKeeper clusters, and Topic Operator.
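A minimal sketch of a Kafka custom resource showing where the spec property sits; the cluster name, listener, and replica counts are hypothetical, and status is filled in by the operator rather than by the user:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}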
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafka-reference
26.6. Installing Third-Party Certificates for HTTP or LDAP
26.6. Installing Third-Party Certificates for HTTP or LDAP Installing a new SSL server certificate for the Apache Web Server, the Directory Server, or both replaces the current SSL certificate with a new one. To do this, you need:
- your private SSL key ( ssl.key in the procedure below)
- your SSL certificate ( ssl.crt in the procedure below)
For a list of accepted formats of the key and certificate, see the ipa-server-certinstall (1) man page.
Prerequisites
The ssl.crt certificate must be signed by a CA known by the service you are loading the certificate into. If this is not the case, install the CA certificate of the CA that signed ssl.crt into IdM, as described in Section 26.3, "Installing a CA Certificate Manually" . This ensures that IdM recognizes the CA, and thus accepts ssl.crt .
Installing the Third-Party Certificate
Use the ipa-server-certinstall utility to install the certificate. Specify where you want to install it:
- --http installs the certificate in the Apache Web Server
- --dirsrv installs the certificate on the Directory Server
For example, to install the SSL certificate into both:
Restart the server into which you installed the certificate.
To restart the Apache Web Server:
To restart the Directory Server:
To verify that the certificate has been correctly installed, make sure it is present in the certificate database.
To display the Apache certificate database:
To display the Directory Server certificate database:
[ "ipa-server-certinstall --http --dirsrv ssl.key ssl.crt", "systemctl restart httpd.service", "systemctl restart dirsrv@REALM.service", "certutil -L -d /etc/httpd/alias", "certutil -L -d /etc/dirsrv/slapd-REALM/" ]
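As a quick sanity check outside the documented procedure, you can confirm that the key and certificate belong together before installing them. A sketch using openssl:

# The two digests must be identical for a matching key and certificate
openssl rsa -in ssl.key -noout -modulus | openssl md5
openssl x509 -in ssl.crt -noout -modulus | openssl md5
# Review the certificate's subject, issuer, and validity period
openssl x509 -in ssl.crt -noout -subject -issuer -dates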
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/third-party-certs-http-ldap
4.15. RHEA-2012:0890 - new package: numad
4.15. RHEA-2012:0890 - new package: numad A new numad package is now available as a Technology Preview for Red Hat Enterprise Linux 6. The numad package provides a daemon for NUMA (Non-Uniform Memory Architecture) systems that monitors NUMA characteristics. As an alternative to manual static CPU pinning and memory assignment, numad provides dynamic adjustment to minimize memory latency on an ongoing basis. The package also provides an interface that can be used to query the numad daemon for the best manual placement of an application. This enhancement update adds the numad package to Red Hat Enterprise Linux 6 as a Technology Preview. (BZ# 758416 , BZ# 824067 ) More information about Red Hat Technology Previews is available here: https://access.redhat.com/support/offerings/techpreview/ All users who want to use the numad Technology Preview should install this newly-released package, which adds this enhancement.
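A minimal sketch of trying the numad Technology Preview on a Red Hat Enterprise Linux 6 host; the resource values passed to -w are hypothetical, and the query syntax is as described in the numad(8) man page:

# Install and start the daemon
yum install numad
service numad start
chkconfig numad on
# Ask numad for a placement recommendation for a workload needing
# 4 CPUs and 8192 MB of memory; it prints the suggested NUMA nodes
numad -w 4:8192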
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/numad
Chapter 12. Using images
Chapter 12. Using images 12.1. Using images overview Use the following topics to discover the different Source-to-Image (S2I), database, and other container images that are available for OpenShift Container Platform users. Red Hat official container images are provided in the Red Hat Registry at registry.redhat.io . OpenShift Container Platform's supported S2I, database, and Jenkins images are provided in the openshift4 repository in the Red Hat Quay Registry. For example, quay.io/openshift-release-dev/ocp-v4.0-<address> is the name of the OpenShift Application Platform image. The xPaaS middleware images are provided in their respective product repositories on the Red Hat Registry but suffixed with a -openshift . For example, registry.redhat.io/jboss-eap-6/eap64-openshift is the name of the JBoss EAP image. All Red Hat supported images covered in this section are described in the Container images section of the Red Hat Ecosystem Catalog . For every version of each image, you can find details on its contents and usage. Browse or search for the image that interests you. Important The newer versions of container images are not compatible with earlier versions of OpenShift Container Platform. Verify and use the correct version of container images, based on your version of OpenShift Container Platform. 12.2. Configuring Jenkins images OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery. The image is based on the Red Hat Universal Base Images (UBI). OpenShift Container Platform follows the LTS release of Jenkins. OpenShift Container Platform provides an image that contains Jenkins 2.x. The OpenShift Container Platform Jenkins images are available on Quay.io or registry.redhat.io . For example:
$ podman pull registry.redhat.io/openshift4/ose-jenkins:<v4.3.0>
To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Container Platform resources can then reference the image stream. But for convenience, OpenShift Container Platform provides image streams in the openshift namespace for the core Jenkins image as well as the example Agent images provided for OpenShift Container Platform integration with Jenkins. 12.2.1. Configuration and customization You can manage Jenkins authentication in two ways: OpenShift Container Platform OAuth authentication provided by the OpenShift Container Platform Login plug-in. Standard authentication provided by Jenkins. 12.2.1.1. OpenShift Container Platform OAuth authentication OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false . This activates the OpenShift Container Platform Login plug-in, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server. Valid credentials are controlled by the OpenShift Container Platform identity provider. Jenkins supports both browser and non-browser access.
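Enabling the login plug-in from the command line is a single step. A sketch, assuming Jenkins was deployed as a deployment configuration named jenkins in the current project:

# Any value other than false activates the OpenShift Login plug-in
$ oc set env dc/jenkins OPENSHIFT_ENABLE_OAUTH=true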
Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin , edit , and view . The login plug-in executes self-SAR requests against those roles in the project or namespace that Jenkins is running in. Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions. The default OpenShift Container Platform admin , edit , and view roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable. When running Jenkins in an OpenShift Container Platform pod, the login plug-in looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in. If the plug-in finds and can read in that config map, you can define the role to Jenkins permission mappings. Specifically: The login plug-in treats the key and value pairs in the config map as Jenkins permission to OpenShift Container Platform role mappings. The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character. If you want to add the Overall Jenkins Administer permission to an OpenShift Container Platform role, the key should be Overall-Administer . To get a sense of which permission groups and permission IDs are available, go to the matrix authorization page in the Jenkins console and examine the IDs for the groups and individual permissions in the table it provides. The value of the key and value pair is the list of OpenShift Container Platform roles the permission should apply to, with each role separated by a comma. If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, the value for the key Overall-Administer would be admin,edit,jenkins . Note The admin user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used. To grant these permissions the OpenShift Container Platform cluster administrator must explicitly define that user in the OpenShift Container Platform identity provider and assign the admin role to the user. The permissions stored for Jenkins users can be changed after the users are initially established. The OpenShift Container Platform Login plug-in polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plug-in polls OpenShift Container Platform. You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes. The easiest way to create a new Jenkins service using OAuth authentication is to use a template.
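Pulling the role-mapping discussion above together, a sketch of what the openshift-jenkins-login-plugin-config config map might look like, using the Overall-Administer example; the custom jenkins role is hypothetical:

kind: ConfigMap
apiVersion: v1
metadata:
  # Must live in the namespace that Jenkins is running in
  name: openshift-jenkins-login-plugin-config
data:
  # Grant the Overall/Administer Jenkins permission to users holding
  # the admin, edit, or custom jenkins role
  Overall-Administer: admin,edit,jenkins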
12.2.1.2. Jenkins authentication Jenkins authentication is used by default if the image is run directly, without using a template. The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password . Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication. Procedure Create a Jenkins application that uses standard Jenkins authentication:
$ oc new-app -e \
    JENKINS_PASSWORD=<password> \
    openshift4/ose-jenkins
12.2.2. Jenkins environment variables The Jenkins server can be configured with the following environment variables (variable, definition, example values and settings):
- OPENSHIFT_ENABLE_OAUTH : Determines whether the OpenShift Container Platform Login plug-in manages authentication when logging in to Jenkins. To enable, set to true . Default: false
- JENKINS_PASSWORD : The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true . Default: password
- JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB : These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. JAVA_MAX_HEAP_PARAM example setting: -Xmx512m . CONTAINER_HEAP_PERCENT default: 0.5 , or 50%. JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB
- JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT : These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m . CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10%
- CONTAINER_CORE_LIMIT : If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2
- JAVA_TOOL_OPTIONS : Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true
- JAVA_GC_OPTS : Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90
- JENKINS_JAVA_OVERRIDES : Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value
- JENKINS_OPTS : Specifies arguments to Jenkins.
- INSTALL_PLUGINS : Specifies additional Jenkins plug-ins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true . Plug-ins are specified as a comma-delimited list of name:version pairs. Example setting: git:3.7.0,subversion:2.10.2
- OPENSHIFT_PERMISSIONS_POLL_INTERVAL : Specifies the interval in milliseconds that the OpenShift Container Platform Login plug-in polls OpenShift Container Platform for the permissions that are associated with each user that is defined in Jenkins. Default: 300000 , or 5 minutes
- OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG : When running this image with an OpenShift Container Platform persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true . Default: false
- OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS : When running this image with an OpenShift Container Platform PV for the Jenkins configuration directory, the transfer of plug-ins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plug-ins in the custom image after the initial startup, the plug-ins are not copied over unless you set this environment variable to true . Default: false
- ENABLE_FATAL_ERROR_LOG_FILE : When running this image with an OpenShift Container Platform PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. The fatal error file is saved at /var/lib/jenkins/logs . Default: false
- NODEJS_SLAVE_IMAGE : Setting this value overrides the image that is used for the default Node.js agent pod configuration. A related image stream tag named jenkins-agent-nodejs is in the project. This variable must be set before Jenkins starts the first time for it to have an effect. Default Node.js agent image in Jenkins server: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-nodejs:latest
- MAVEN_SLAVE_IMAGE : Setting this value overrides the image used for the default maven agent pod configuration. A related image stream tag named jenkins-agent-maven is in the project. This variable must be set before Jenkins starts the first time for it to have an effect. Default Maven agent image in Jenkins server: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven:latest
12.2.3. Providing Jenkins cross project access If you are going to run Jenkins somewhere other than your same project, you must provide an access token to Jenkins to access your project. Procedure Identify the secret for the service account that has appropriate permissions to access the project Jenkins must access:
$ oc describe serviceaccount jenkins
Example output
Name: default
Labels: <none>
Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d }
Tokens: jenkins-token-izv1u jenkins-token-uyswp
In this case the secret is named jenkins-token-uyswp . Retrieve the token from the secret:
$ oc describe secret <secret name from above>
Example output
Name: jenkins-token-uyswp
Labels: <none>
Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
token: eyJhbGc..<content cut>....wRA
The token parameter contains the token value Jenkins requires to access the project.
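Besides retrieving the token, the Jenkins service account also needs rights in the other project. A sketch of one way to grant them; the project names are hypothetical:

# Allow the jenkins service account from the jenkins-project namespace
# to edit resources in the target project
$ oc policy add-role-to-user edit \
    system:serviceaccount:jenkins-project:jenkins \
    -n target-project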
12.2.4. Jenkins cross volume mount points The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration: /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions. 12.2.5. Customizing the Jenkins image through source-to-image To customize the official OpenShift Container Platform Jenkins image, you can use the image as a source-to-image (S2I) builder. You can use S2I to copy your custom Jenkins job definitions, add additional plug-ins, or replace the provided config.xml file with your own, custom, configuration. To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure:
- plugins : This directory contains those binary Jenkins plug-ins you want to copy into Jenkins.
- plugins.txt : This file lists the plug-ins you want to install using the following syntax:
- configuration/jobs : This directory contains the Jenkins job definitions.
- configuration/config.xml : This file contains your custom Jenkins configuration.
The contents of the configuration/ directory are copied to the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml , there. Sample build configuration customizes the Jenkins image in OpenShift Container Platform
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: custom-jenkins-build
spec:
  source: 1
    git:
      uri: https://github.com/custom/repository
    type: Git
  strategy: 2
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: jenkins:2
        namespace: openshift
    type: Source
  output: 3
    to:
      kind: ImageStreamTag
      name: custom-jenkins:latest
1 The source parameter defines the source Git repository with the layout described above.
2 The strategy parameter defines the original Jenkins image to use as a source image for the build.
3 The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image.
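After creating the build configuration shown above, the customization is triggered with a single command. A sketch, assuming the BuildConfig was created in the current project:

# Build the customized Jenkins image and follow the build log
$ oc start-build custom-jenkins-build --follow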
12.2.6. Configuring the Jenkins Kubernetes plug-in The OpenShift Container Platform Jenkins image includes the pre-installed Kubernetes plug-in that allows Jenkins agents to be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform. To use the Kubernetes plug-in, OpenShift Container Platform provides images that are suitable for use as Jenkins agents including the Base, Maven, and Node.js images. Both the Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Container Platform Jenkins image configuration for the Kubernetes plug-in. That configuration includes labels for each of the images that can be applied to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Container Platform pod running the respective agent image. The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plug-in. With the OpenShift Container Platform sync plug-in, the Jenkins image on Jenkins startup searches for the following within the project that it is running or the projects specifically listed in the plug-in's configuration: Image streams that have the label role set to jenkins-agent . Image stream tags that have the annotation role set to jenkins-agent . Config maps that have the label role set to jenkins-agent . When it finds an image stream with the appropriate label, or image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plug-in configuration so you can assign your Jenkins jobs to run in a pod that runs the container image that is provided by the image stream. The name and image references of the image stream or image stream tag are mapped to the name and image fields in the Kubernetes plug-in pod template. You can control the label field of the Kubernetes plug-in pod template by setting an annotation on the image stream or image stream tag object with the key agent-label . Otherwise, the name is used as the label. Note Do not log in to the Jenkins console and modify the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plug-in detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. When it finds a config map with the appropriate label, it assumes that any values in the key-value data payload of the config map contain Extensible Markup Language (XML) that is consistent with the configuration format for Jenkins and the Kubernetes plug-in pod templates. A key differentiator to note when using config maps, instead of image streams or image stream tags, is that you can control all the parameters of the Kubernetes plug-in pod template. Sample config map for jenkins-agent
kind: ConfigMap
apiVersion: v1
metadata:
  name: jenkins-agent
  labels:
    role: jenkins-agent
data:
  template1: |-
    <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      <inheritFrom></inheritFrom>
      <name>template1</name>
      <instanceCap>2147483647</instanceCap>
      <idleMinutes>0</idleMinutes>
      <label>template1</label>
      <serviceAccount>jenkins</serviceAccount>
      <nodeSelector></nodeSelector>
      <volumes/>
      <containers>
        <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          <name>jnlp</name>
          <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image>
          <privileged>false</privileged>
          <alwaysPullImage>true</alwaysPullImage>
          <workingDir>/tmp</workingDir>
          <command></command>
          <args>${computer.jnlpmac} ${computer.name}</args>
          <ttyEnabled>false</ttyEnabled>
          <resourceRequestCpu></resourceRequestCpu>
          <resourceRequestMemory></resourceRequestMemory>
          <resourceLimitCpu></resourceLimitCpu>
          <resourceLimitMemory></resourceLimitMemory>
          <envVars/>
        </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      </containers>
      <envVars/>
      <annotations/>
      <imagePullSecrets/>
      <nodeProperties/>
    </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
Note If you log in to the Jenkins console and make further changes to the pod template configuration after the pod template is created, and the OpenShift Container Platform Sync plug-in detects that the config map has changed, it will replace the pod template and overwrite those configuration changes. You cannot merge a new configuration with the existing configuration.
After it is installed, the OpenShift Container Platform Sync plug-in monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plug-in. The following rules apply: Removing the label or annotation from the config map, image stream, or image stream tag results in the deletion of any existing PodTemplate from the configuration of the Kubernetes plug-in. If those objects are removed, the corresponding configuration is removed from the Kubernetes plug-in. Creating appropriately labeled or annotated ConfigMap , ImageStream , or ImageStreamTag objects, or adding the labels after their initial creation, leads to the creation of a PodTemplate in the Kubernetes plug-in configuration. In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plug-in configuration and override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map. To use a container image as a Jenkins agent, the image must run the agent as an entrypoint. For more details about this, refer to the official Jenkins documentation . 12.2.7. Jenkins permissions The <serviceAccount> element of the pod template XML in the config map specifies the OpenShift Container Platform service account used for the resulting pod, and that service account's credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Container Platform master are allowed from the pod. Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes plug-in that runs in the OpenShift Container Platform Jenkins image. If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted. The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master. Any pod templates that are automatically discovered by the OpenShift Container Platform sync plug-in because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account. For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plug-in, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plug-in, or labeling a config map whose data is the XML configuration for a pod template. If you do not specify a value for the service account, the default service account is used. Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from within the pod. 12.2.8. Creating a Jenkins service from a template Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy.
The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup. The two available templates both define a deployment configuration and a service. The templates differ in their storage strategy, which affects whether or not the Jenkins content persists across a pod restart. Note A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment. jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing. jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart. To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment. After you select which template you want, you must instantiate the template to be able to use Jenkins. Procedure Create a new Jenkins application using one of the following methods: A PV:
$ oc new-app jenkins-persistent
Or an emptyDir type volume where configuration does not persist across pod restarts:
$ oc new-app jenkins-ephemeral
12.2.9. Using the Jenkins Kubernetes plug-in In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig , openshift-jee-sample-docker to run. The second BuildConfig layers the new WAR file into a container image. Sample BuildConfig that uses the Jenkins Kubernetes plug-in
kind: List
apiVersion: v1
items:
- kind: ImageStream
  apiVersion: image.openshift.io/v1
  metadata:
    name: openshift-jee-sample
- kind: BuildConfig
  apiVersion: build.openshift.io/v1
  metadata:
    name: openshift-jee-sample-docker
  spec:
    strategy:
      type: Docker
    source:
      type: Docker
      dockerfile: |-
        FROM openshift/wildfly-101-centos7:latest
        COPY ROOT.war /wildfly/standalone/deployments/ROOT.war
        CMD $STI_SCRIPTS_PATH/run
      binary:
        asFile: ROOT.war
    output:
      to:
        kind: ImageStreamTag
        name: openshift-jee-sample:latest
- kind: BuildConfig
  apiVersion: build.openshift.io/v1
  metadata:
    name: openshift-jee-sample
  spec:
    strategy:
      type: JenkinsPipeline
      jenkinsPipelineStrategy:
        jenkinsfile: |-
          node("maven") {
            sh "git clone https://github.com/openshift/openshift-jee-sample.git ."
            sh "mvn -B -Popenshift package"
            sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war"
          }
    triggers:
    - type: ConfigChange
It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the example, which overrides the container memory and specifies an environment variable. Sample BuildConfig that uses the Jenkins Kubernetes plug-in, specifying memory limit and environment variable
kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: openshift-jee-sample
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        podTemplate(label: "mypod", 1
                    cloud: "openshift", 2
                    inheritFrom: "maven", 3
                    containers: [
          containerTemplate(name: "jnlp", 4
                            image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5
                            resourceRequestMemory: "512Mi", 6
                            resourceLimitMemory: "512Mi", 7
                            envVars: [
            envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8
          ])
        ]) {
          node("mypod") { 9
            sh "git clone https://github.com/openshift/openshift-jee-sample.git ."
            sh "mvn -B -Popenshift package"
            sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war"
          }
        }
  triggers:
  - type: ConfigChange
1 A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza.
2 The cloud value must be set to openshift .
3 The new pod template can inherit its configuration from an existing pod template. In this case, inherited from the Maven pod template that is pre-defined by OpenShift Container Platform.
4 This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the Container name jnlp .
5 Specify the container image name again. This is a known issue.
6 A memory request of 512 Mi is specified.
7 A memory limit of 512 Mi is specified.
8 An environment variable CONTAINER_HEAP_PERCENT , with value 0.25 , is specified.
9 The node stanza references the name of the defined pod template.
By default, the pod is deleted when the build completes. This behavior can be modified with the plug-in or within a pipeline Jenkinsfile. 12.2.10. Jenkins memory requirements When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi . By default, all other processes that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible. If project quotas allow for it, see the recommendations from the Jenkins documentation on what a Jenkins master should have from a memory perspective. Those recommendations prescribe allocating even more memory for the Jenkins master. It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plug-in. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis. You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template.
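A sketch of such an override at instantiation time; the 2Gi value is hypothetical:

# Instantiate the template with a larger memory limit for the Jenkins pod
$ oc new-app jenkins-persistent -p MEMORY_LIMIT=2Gi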
12.2.11. Additional resources See Base image options for more information on the Red Hat Universal Base Images (UBI). 12.3. Jenkins agent OpenShift Container Platform provides three images that are suitable for use as Jenkins agents: the Base , Maven , and Node.js images. The first is a base image for Jenkins agents: It pulls in both the required tools (headless Java, the Jenkins JNLP client) and useful ones (including git , tar , zip , and nss , among others). It establishes the JNLP agent as the entrypoint. It includes the oc client tooling for invoking command line operations from within Jenkins jobs. It provides Dockerfiles for both Red Hat Enterprise Linux (RHEL) and localdev images. Two more images that extend the base image are also provided: Maven v3.5 image Node.js v10 image and Node.js v12 image The Maven and Node.js Jenkins agent images provide Dockerfiles for the Universal Base Image (UBI) that you can reference when building new agent images. Also note the contrib and contrib/bin subdirectories. They allow for the insertion of configuration files and executable scripts for your image. Important Use and extend an appropriate agent image version for your version of OpenShift Container Platform. If the oc client version that is embedded in the agent image is not compatible with the OpenShift Container Platform version, unexpected behavior can result. 12.3.1. Jenkins agent images The OpenShift Container Platform Jenkins agent images are available on Quay.io or registry.redhat.io . Jenkins images are available through the Red Hat Registry:
$ docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0>
To use these images, you can either access them directly from Quay.io or registry.redhat.io or push them into your OpenShift Container Platform container image registry. 12.3.2. Jenkins agent environment variables Each Jenkins agent container can be configured with the following environment variables (variable, definition, example values and settings):
- JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB : These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. JAVA_MAX_HEAP_PARAM example setting: -Xmx512m . CONTAINER_HEAP_PERCENT default: 0.5 , or 50%. JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB
- JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT : These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m . CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10%
- CONTAINER_CORE_LIMIT : If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2
- JAVA_TOOL_OPTIONS : Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true
- JAVA_GC_OPTS : Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90
- JENKINS_JAVA_OVERRIDES : Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and can be used to override any of them, if necessary. Separate each additional option with a space and if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value
- USE_JAVA_VERSION : Specifies the version of Java to use to run the agent in its container. The container base image has two versions of java installed: java-11 and java-1.8.0 . If you extend the container base image, you can specify any alternative version of java using its associated suffix. The default value is java-11 .
Example setting: java-1.8.0
12.3.3. Jenkins agent memory requirements
A JVM is used in all Jenkins agents to host the Jenkins JNLP agent as well as to run any Java applications such as javac , Maven, or Gradle.
By default, the Jenkins JNLP agent JVM uses 50% of the container memory limit for its heap. This value can be modified by the CONTAINER_HEAP_PERCENT environment variable. It can also be capped at an upper limit or overridden entirely.
By default, any other processes run in the Jenkins agent container, such as shell scripts or oc commands run from pipelines, cannot use more than the remaining 50% memory limit without provoking an OOM kill.
By default, each further JVM process that runs in a Jenkins agent container uses up to 25% of the container memory limit for its heap. It might be necessary to tune this limit for many build workloads.
12.3.4. Jenkins agent Gradle builds
Hosting Gradle builds in the Jenkins agent on OpenShift Container Platform presents additional complications because in addition to the Jenkins JNLP agent and Gradle JVMs, Gradle spawns a third JVM to run tests if they are specified.
The following settings are suggested as a starting point for running Gradle builds in a memory constrained Jenkins agent on OpenShift Container Platform. You can modify these settings as required.
Ensure the long-lived Gradle daemon is disabled by adding org.gradle.daemon=false to the gradle.properties file.
Disable parallel build execution by ensuring org.gradle.parallel=true is not set in the gradle.properties file and that --parallel is not set as a command line argument.
To prevent Java compilations running out-of-process, set java { options.fork = false } in the build.gradle file.
Disable multiple additional test processes by ensuring test { maxParallelForks = 1 } is set in the build.gradle file.
Override the Gradle JVM memory parameters by setting the GRADLE_OPTS , JAVA_OPTS , or JAVA_TOOL_OPTIONS environment variables.
Set the maximum heap size and JVM arguments for any Gradle test JVM by defining the maxHeapSize and jvmArgs settings in build.gradle , or through the -Dorg.gradle.jvmargs command line argument.
12.3.5. Jenkins agent pod retention
Jenkins agent pods are deleted by default after the build completes or is stopped. This behavior can be changed by the Kubernetes plug-in pod retention setting. Pod retention can be set for all Jenkins builds, with overrides for each pod template. The following behaviors are supported:
Always keeps the build pod regardless of build result.
Default uses the plug-in value, which is the pod template only.
Never always deletes the pod.
On Failure keeps the pod if it fails during the build.
You can override pod retention in the pipeline Jenkinsfile:
podTemplate(label: "mypod",
  cloud: "openshift",
  inheritFrom: "maven",
  podRetention: onFailure(), 1
  containers: [
    ...
  ]) {
  node("mypod") {
    ...
  }
}
1 Allowed values for podRetention are never() , onFailure() , always() , and default() .
Warning
Pods that are kept might continue to run and count against resource quotas.
12.4. Source-to-image
You can use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. You can use the Red Hat Java Source-to-Image for OpenShift documentation as a reference for runtime environments that use Java. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images.
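For example, an S2I builder image and a Git repository can be combined into a running application with a single command. The following sketch uses the sclorg/nodejs-ex sample repository that is commonly used in OpenShift examples; substitute your own builder image stream and repository:
$ oc new-app nodejs~https://github.com/sclorg/nodejs-ex.git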
With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images include:
.NET
Java
Go
Node.js
Perl
PHP
Python
Ruby
S2I images are available for you to use directly from the OpenShift Container Platform web console by following this procedure:
Log in to the OpenShift Container Platform web console using your login credentials. The default view for the OpenShift Container Platform web console is the Administrator perspective.
Use the perspective switcher to switch to the Developer perspective.
In the +Add view, select an existing project from the list or use the Project drop-down list to create a new project.
Choose All services under the tile Developer Catalog .
Select the Builder Images type to see the available S2I images.
S2I images are also available through the Configuring the Cluster Samples Operator .
12.4.1. Source-to-image build process overview
Source-to-image (S2I) produces ready-to-run images by injecting source code into a container that prepares that source code to be run. It performs the following steps:
Runs the FROM <builder image> command
Copies the source code to a defined location in the builder image
Runs the assemble script in the builder image
Sets the run script in the builder image as the default command
Buildah then creates the container image.
12.4.2. Additional resources
For instructions on using the Cluster Samples Operator, see the Configuring the Cluster Samples Operator .
For more information on S2I builds, see the builds strategy documentation on S2I builds .
For troubleshooting assistance for the S2I process, see Troubleshooting the Source-to-Image process .
For an overview of creating images with S2I, see Creating images from source code with source-to-image .
For an overview of testing S2I images, see About testing S2I images .
12.5. Customizing source-to-image images
Source-to-image (S2I) builder images include assemble and run scripts, but the default behavior of those scripts is not suitable for all users. You can customize the behavior of an S2I builder that includes default scripts.
12.5.1. Invoking scripts embedded in an image
Builder images provide their own version of the source-to-image (S2I) scripts that cover the most common use-cases. If these scripts do not fulfill your needs, S2I provides a way of overriding them by adding custom ones in the .s2i/bin directory. However, by doing this, you are completely replacing the standard scripts. In some cases, replacing the scripts is acceptable, but, in other scenarios, you can run a few commands before or after the scripts while retaining the logic of the script provided in the image. To reuse the standard scripts, you can create a wrapper script that runs custom logic and delegates further work to the default scripts in the image.
Procedure
Look at the value of the io.openshift.s2i.scripts-url label to determine the location of the scripts inside the builder image:
$ podman inspect --format='{{ index .Config.Labels "io.openshift.s2i.scripts-url" }}' wildfly/wildfly-centos7
Example output
image:///usr/libexec/s2i
You inspected the wildfly/wildfly-centos7 builder image and found out that the scripts are in the /usr/libexec/s2i directory.
Create a script that includes an invocation of one of the standard scripts wrapped in other commands:
.s2i/bin/assemble script
#!/bin/bash
echo "Before assembling"
/usr/libexec/s2i/assemble
rc=$?
if [ $rc -eq 0 ]; then
echo "After successful assembling"
else
echo "After failed assembling"
fi
exit $rc
This example shows a custom assemble script that prints a message, runs the standard assemble script from the image, and prints another message depending on the exit code of the assemble script.
Important
When wrapping the run script, you must use exec for invoking it to ensure signals are handled properly. The use of exec also precludes the ability to run additional commands after invoking the default image run script.
.s2i/bin/run script
#!/bin/bash
echo "Before running application"
exec /usr/libexec/s2i/run
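When you add these wrapper scripts to your application source, place them in the .s2i/bin directory and make sure they are executable; a script that lacks the executable bit may not be invoked by the builder. A minimal sketch, run from the root of your application repository:
$ mkdir -p .s2i/bin
$ chmod +x .s2i/bin/assemble .s2i/bin/run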
[ "podman pull registry.redhat.io/openshift4/ose-jenkins:<v4.3.0>", "oc new-app -e JENKINS_PASSWORD=<password> openshift4/ose-jenkins", "oc describe serviceaccount jenkins", "Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp", "oc describe secret <secret name from above>", "Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA", "pluginId:pluginVersion", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "oc new-app jenkins-persistent", "oc new-app jenkins-ephemeral", "kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ 
envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange", "docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0>", "docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0>", "docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0>", "docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>", "docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0>", "podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }", "podman inspect --format='{{ index .Config.Labels \"io.openshift.s2i.scripts-url\" }}' wildfly/wildfly-centos7", "image:///usr/libexec/s2i", "#!/bin/bash echo \"Before assembling\" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo \"After successful assembling\" else echo \"After failed assembling\" fi exit USDrc", "#!/bin/bash echo \"Before running application\" exec /usr/libexec/s2i/run" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/images/using-images
Chapter 2. Managing Ansible Automation Platform licensing, updates and support
Chapter 2. Managing Ansible Automation Platform licensing, updates and support
Ansible is an open source software project and is licensed under the GNU General Public License version 3, as described in the Ansible Source Code . You must have valid subscriptions attached before installing Ansible Automation Platform. For more information, see Attaching Subscriptions .
2.1. Trial and evaluation
A license is required to run Ansible Automation Platform. You can start by using a free trial license. Trial licenses for Ansible Automation Platform are available at the Red Hat product trial center . Support is not included in a trial license or during an evaluation of the Ansible Automation Platform.
2.2. Component licenses
To view the license information for the components included in Ansible Automation Platform, see /usr/share/doc/automation-controller-<version>/README , where <version> refers to the version of automation controller you have installed. To view a specific license, see /usr/share/doc/automation-controller-<version>/*.txt , where * is the license file name to which you are referring.
2.3. Node counting in licenses
The Ansible Automation Platform license defines the number of Managed Nodes that can be managed as part of your subscription. A typical license says "License Count: 500", which sets the maximum number of Managed Nodes at 500. For more information about managed node requirements for licensing, see How are "managed nodes" defined as part of the Red Hat Ansible Automation Platform offering .
Note
Ansible does not recycle node counts or reset automated hosts.
2.4. Subscription Types
Red Hat Ansible Automation Platform is provided at various levels of support and number of machines as an annual subscription.
Standard:
Manage any size environment
Enterprise 8x5 support and SLA
Maintenance and upgrades included
Review the SLA at Product Support Terms of Service
Review the Red Hat Support Severity Level Definitions
Premium:
Manage any size environment, including mission-critical environments
Premium 24x7 support and SLA
Maintenance and upgrades included
Review the SLA at Product Support Terms of Service
Review the Red Hat Support Severity Level Definitions
All subscription levels include regular updates and releases of automation controller, Ansible, and any other components of the Ansible Automation Platform. For more information, contact Ansible through the Red Hat Customer Portal or at the Ansible site .
2.5. Attaching your Red Hat Ansible Automation Platform subscription
You must have valid subscriptions attached on all nodes before installing Red Hat Ansible Automation Platform. Attaching your Ansible Automation Platform subscription provides access to subscription-only resources necessary to proceed with the installation.
Procedure
Make sure your system is registered:
$ sudo subscription-manager register --username <$INSERT_USERNAME_HERE> --password <$INSERT_PASSWORD_HERE>
Obtain the pool_id for your Red Hat Ansible Automation Platform subscription:
$ sudo subscription-manager list --available --all | grep "Ansible Automation Platform" -B 3 -A 6
Note
Do not use MCT4022 as a pool_id for your subscription because it can cause Ansible Automation Platform subscription attachment to fail.
Example
An example output of the subscription-manager list command.
Obtain the pool_id as seen in the Pool ID: section:
Subscription Name: Red Hat Ansible Automation, Premium (5000 Managed Nodes)
Provides: Red Hat Ansible Engine
          Red Hat Ansible Automation Platform
SKU: MCT3695
Contract:
Pool ID: <pool_id>
Provides Management: No
Available: 4999
Suggested: 1
Attach the subscription:
$ sudo subscription-manager attach --pool=<pool_id>
You have now attached your Red Hat Ansible Automation Platform subscriptions to all nodes. To remove this subscription, enter the following command:
$ sudo subscription-manager remove --pool=<pool_id>
Verification
Verify the subscription was successfully attached:
$ sudo subscription-manager list --consumed
Troubleshooting
If you are unable to locate certain packages that came bundled with the Ansible Automation Platform installer, or if you are seeing a Repositories disabled by configuration message, try enabling the repository by using the command:
Red Hat Ansible Automation Platform 2.5 for RHEL 8
$ sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms
Red Hat Ansible Automation Platform 2.5 for RHEL 9
$ sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms
2.6. Obtaining a manifest file
You can obtain a subscription manifest in the Subscription Allocations section of Red Hat Subscription Management. After you obtain a subscription allocation, you can download its manifest file and upload it to activate Ansible Automation Platform. To begin, log in to the Red Hat Customer Portal using your administrator user account and follow the procedures in this section.
2.6.1. Create a subscription allocation
Creating a new subscription allocation allows you to set aside subscriptions and entitlements for a system that is currently offline or air-gapped. This is necessary before you can download its manifest and upload it to Ansible Automation Platform.
Procedure
From the Subscription Allocations page, click New Subscription Allocation .
Enter a name for the allocation so that you can find it later.
Select Type: Satellite 6.16 as the management application.
Click Create .
Next steps
Add the subscriptions needed for Ansible Automation Platform to run properly.
2.6.2. Adding subscriptions to a subscription allocation
Once an allocation is created, you can add the subscriptions you need for Ansible Automation Platform to run properly. This step is necessary before you can download the manifest and add it to Ansible Automation Platform.
Procedure
From the Subscription Allocations page, click on the name of the Subscription Allocation to which you would like to add a subscription.
Click the Subscriptions tab.
Click Add Subscriptions .
Enter the number of Ansible Automation Platform Entitlement(s) you plan to add.
Click Submit .
Next steps
Download the manifest file from Red Hat Subscription Management.
2.6.3. Downloading a manifest file
After an allocation is created and has the appropriate subscriptions on it, you can download the manifest from Red Hat Subscription Management.
Procedure
From the Subscription Allocations page, click on the name of the Subscription Allocation for which you would like to generate a manifest.
Click the Subscriptions tab.
Click Export Manifest to download the manifest file. This downloads a file named manifest_<allocation name>_<date>.zip to your default downloads folder.
Next steps
Upload the manifest file to activate Red Hat Ansible Automation Platform.
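Before uploading, you can optionally confirm that the downloaded archive is intact and inspect its contents. This is a generic check rather than a documented step, and assumes the unzip utility is available:
$ unzip -l manifest_<allocation name>_<date>.zip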
2.7. Activating Red Hat Ansible Automation Platform
Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to authorize the use of Ansible Automation Platform. To obtain a subscription, you can do either of the following:
Use your Red Hat customer or Satellite credentials when you launch Ansible Automation Platform.
Upload a subscriptions manifest file either using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook.
2.7.1. Activate with credentials
When Ansible Automation Platform launches for the first time, the Ansible Automation Platform Subscription screen automatically displays. You can use your Red Hat credentials to retrieve and import your subscription directly into Ansible Automation Platform.
Note
You are opted in for Automation Analytics by default when you activate the platform on first time log in. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by doing the following:
From the navigation panel, select Settings System .
Clear the Gather data for Automation Analytics option.
Click Save .
Procedure
Log in to Red Hat Ansible Automation Platform.
Select Username / password .
Enter your Red Hat username and password.
Select your subscription from the Subscription list.
Note
You can also use your Satellite username and password if your cluster nodes are registered to Satellite through Subscription Manager.
Review the End User License Agreement and select I agree to the End User License Agreement .
Click Finish .
Verification
After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance , indicating you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:
Hosts automated: Host count automated by the job, which consumes the license count
Hosts imported: Host count considering all inventory sources (does not impact hosts remaining)
Hosts remaining: Total host count minus hosts automated
2.7.2. Activate with a manifest file
If you have a subscriptions manifest, you can upload the manifest file by using the Red Hat Ansible Automation Platform interface.
Note
You are opted in for Automation Analytics by default when you activate the platform on first time log in. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by doing the following:
From the navigation panel, select Settings System .
Clear the Gather data for Automation Analytics option.
Click Save .
Prerequisites
You must have a Red Hat Subscription Manifest file exported from the Red Hat Customer Portal. For more information, see Obtaining a manifest file .
Procedure
Log in to Red Hat Ansible Automation Platform.
If you are not immediately prompted for a manifest file, go to Settings Subscription .
Select Subscription manifest .
Click Browse and select the manifest file.
Review the End User License Agreement and select I agree to the End User License Agreement .
Click Finish .
Note
If the BROWSE button is disabled on the License page, clear the USERNAME and PASSWORD fields.
Verification
After your subscription has been accepted, subscription details are displayed.
A status of Compliant indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance , indicating you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:
Hosts automated: Host count automated by the job, which consumes the license count
Hosts imported: Host count considering all inventory sources (does not impact hosts remaining)
Hosts remaining: Total host count minus hosts automated
Next steps
You can return to the license screen by selecting Settings Subscription from the navigation panel and clicking Edit subscription .
[ "sudo subscription-manager register --username <USDINSERT_USERNAME_HERE> --password <USDINSERT_PASSWORD_HERE>", "sudo subscription-manager list --available --all | grep \"Ansible Automation Platform\" -B 3 -A 6", "Subscription Name: Red Hat Ansible Automation, Premium (5000 Managed Nodes) Provides: Red Hat Ansible Engine Red Hat Ansible Automation Platform SKU: MCT3695 Contract: ```` Pool ID: <pool_id> Provides Management: No Available: 4999 Suggested: 1", "sudo subscription-manager attach --pool=<pool_id>", "sudo subscription-manager remove --pool=<pool_id>", "sudo subscription-manager list --consumed", "sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms", "sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/access_management_and_authentication/assembly-gateway-licensing
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation
Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps:
Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps:
Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions .
When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS .
When the Kubernetes authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS .
Ensure that you are using signed certificates on your Vault servers.
Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server.
Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile .
Add the CipherTrust username to the Common Name field during profile creation.
Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the next step.
To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save .
Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively.
To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface .
Select KMIP (Key Management Interoperability Protocol) and click Next .
Select a free Port .
Select Network Interface as all .
Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional .
(Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default.
Select the certificate authority (CA) to be used, and click Save .
To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate .
Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK):
Navigate to Keys Add Key .
Enter Key Name .
Set the Algorithm and Size to AES and 256 respectively.
Enable Create a key in Pre-Active state and set the date and time for activation.
Ensure that Encrypt and Decrypt are enabled under Key Usage .
Copy the ID of the newly created Key to be used as the Unique Identifier during deployment.
Minimum starting node requirements
An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide.
Important
To perform stop and start node operations, or to create or add a machine pool, it is necessary to apply proper labeling, as shown in the example below. Replace <cluster-name> with the cluster name and <machinepool-name> with the machine pool name.
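The following command, taken from the command listing at the end of this document, applies the required openshift-storage label to a machine pool:
rosa edit machinepool --cluster <cluster-name> --labels cluster.ocs.openshift.io/openshift-storage="" <machinepool-name>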
[ "rosa edit machinepool --cluster <cluster-name> --labels cluster.ocs.openshift.io/openshift-storage=\"\" <machinepool-name>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/preparing_to_deploy_openshift_data_foundation